00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 1064 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3726 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.096 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.097 The recommended git tool is: git 00:00:00.097 using credential 00000000-0000-0000-0000-000000000002 00:00:00.098 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.149 Fetching changes from the remote Git repository 00:00:00.151 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.192 Using shallow fetch with depth 1 00:00:00.192 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.192 > git --version # timeout=10 00:00:00.231 > git --version # 'git version 2.39.2' 00:00:00.231 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.257 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.257 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.693 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.704 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.714 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.714 > git config core.sparsecheckout # timeout=10 00:00:06.725 > git read-tree -mu HEAD # timeout=10 00:00:06.743 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # 
timeout=5 00:00:06.763 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.763 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.842 [Pipeline] Start of Pipeline 00:00:06.855 [Pipeline] library 00:00:06.857 Loading library shm_lib@master 00:00:06.857 Library shm_lib@master is cached. Copying from home. 00:00:06.879 [Pipeline] node 00:00:06.892 Running on WFP4 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.894 [Pipeline] { 00:00:06.904 [Pipeline] catchError 00:00:06.906 [Pipeline] { 00:00:06.916 [Pipeline] wrap 00:00:06.923 [Pipeline] { 00:00:06.929 [Pipeline] stage 00:00:06.931 [Pipeline] { (Prologue) 00:00:07.135 [Pipeline] sh 00:00:07.423 + logger -p user.info -t JENKINS-CI 00:00:07.441 [Pipeline] echo 00:00:07.443 Node: WFP4 00:00:07.450 [Pipeline] sh 00:00:07.748 [Pipeline] setCustomBuildProperty 00:00:07.758 [Pipeline] echo 00:00:07.759 Cleanup processes 00:00:07.763 [Pipeline] sh 00:00:08.047 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.047 674619 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.058 [Pipeline] sh 00:00:08.342 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.342 ++ grep -v 'sudo pgrep' 00:00:08.342 ++ awk '{print $1}' 00:00:08.342 + sudo kill -9 00:00:08.342 + true 00:00:08.355 [Pipeline] cleanWs 00:00:08.364 [WS-CLEANUP] Deleting project workspace... 00:00:08.364 [WS-CLEANUP] Deferred wipeout is used... 
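The cleanup step above pipes `sudo pgrep -af` through `grep -v 'sudo pgrep'` and `awk '{print $1}'` before `kill -9`, and tolerates an empty match (the `+ true` line). A hypothetical standalone helper isolating the text-processing half of that sweep (the function name is illustrative, not part of the job's scripts):

```shell
# Hypothetical filter mirroring the sweep traced above: given `pgrep -af`
# output, drop the line for the pgrep invocation itself, keep the PID column.
filter_pids() {
    grep -v 'pgrep' | awk '{print $1}'
}
# In the job this would feed:
#   sudo kill -9 $(sudo pgrep -af "$WORKSPACE/spdk" | filter_pids) || true
# where `|| true` matches the log's `+ true`: an empty match must not fail
# the cleanup stage.
```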
00:00:08.370 [WS-CLEANUP] done 00:00:08.374 [Pipeline] setCustomBuildProperty 00:00:08.387 [Pipeline] sh 00:00:08.673 + sudo git config --global --replace-all safe.directory '*' 00:00:08.777 [Pipeline] httpRequest 00:00:09.700 [Pipeline] echo 00:00:09.701 Sorcerer 10.211.164.20 is alive 00:00:09.709 [Pipeline] retry 00:00:09.711 [Pipeline] { 00:00:09.723 [Pipeline] httpRequest 00:00:09.727 HttpMethod: GET 00:00:09.728 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.729 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.746 Response Code: HTTP/1.1 200 OK 00:00:09.747 Success: Status code 200 is in the accepted range: 200,404 00:00:09.747 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.693 [Pipeline] } 00:00:13.711 [Pipeline] // retry 00:00:13.718 [Pipeline] sh 00:00:14.006 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:14.022 [Pipeline] httpRequest 00:00:14.433 [Pipeline] echo 00:00:14.434 Sorcerer 10.211.164.20 is alive 00:00:14.443 [Pipeline] retry 00:00:14.445 [Pipeline] { 00:00:14.458 [Pipeline] httpRequest 00:00:14.463 HttpMethod: GET 00:00:14.463 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:14.464 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:14.477 Response Code: HTTP/1.1 200 OK 00:00:14.478 Success: Status code 200 is in the accepted range: 200,404 00:00:14.478 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:01:57.564 [Pipeline] } 00:01:57.578 [Pipeline] // retry 00:01:57.585 [Pipeline] sh 00:01:57.870 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:02:00.419 [Pipeline] sh 00:02:00.707 + git -C spdk log 
--oneline -n5 00:02:00.707 e01cb43b8 mk/spdk.common.mk sed the minor version 00:02:00.707 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state 00:02:00.707 2104eacf0 test/check_so_deps: use VERSION to look for prior tags 00:02:00.707 66289a6db build: use VERSION file for storing version 00:02:00.707 626389917 nvme/rdma: Don't limit max_sge if UMR is used 00:02:00.725 [Pipeline] withCredentials 00:02:00.737 > git --version # timeout=10 00:02:00.750 > git --version # 'git version 2.39.2' 00:02:00.768 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:02:00.770 [Pipeline] { 00:02:00.780 [Pipeline] retry 00:02:00.782 [Pipeline] { 00:02:00.799 [Pipeline] sh 00:02:01.084 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:02:01.356 [Pipeline] } 00:02:01.373 [Pipeline] // retry 00:02:01.378 [Pipeline] } 00:02:01.397 [Pipeline] // withCredentials 00:02:01.406 [Pipeline] httpRequest 00:02:01.721 [Pipeline] echo 00:02:01.723 Sorcerer 10.211.164.20 is alive 00:02:01.732 [Pipeline] retry 00:02:01.735 [Pipeline] { 00:02:01.749 [Pipeline] httpRequest 00:02:01.753 HttpMethod: GET 00:02:01.754 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:01.754 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:01.758 Response Code: HTTP/1.1 200 OK 00:02:01.759 Success: Status code 200 is in the accepted range: 200,404 00:02:01.759 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:04.695 [Pipeline] } 00:02:04.731 [Pipeline] // retry 00:02:04.738 [Pipeline] sh 00:02:05.025 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:06.416 [Pipeline] sh 00:02:06.764 + git -C dpdk log --oneline -n5 00:02:06.764 eeb0605f11 version: 23.11.0 00:02:06.764 238778122a doc: update release notes for 23.11 00:02:06.764 46aa6b3cfc doc: fix 
description of RSS features 00:02:06.764 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:06.764 7e421ae345 devtools: support skipping forbid rule check 00:02:06.774 [Pipeline] } 00:02:06.788 [Pipeline] // stage 00:02:06.797 [Pipeline] stage 00:02:06.800 [Pipeline] { (Prepare) 00:02:06.821 [Pipeline] writeFile 00:02:06.836 [Pipeline] sh 00:02:07.123 + logger -p user.info -t JENKINS-CI 00:02:07.136 [Pipeline] sh 00:02:07.424 + logger -p user.info -t JENKINS-CI 00:02:07.437 [Pipeline] sh 00:02:07.724 + cat autorun-spdk.conf 00:02:07.724 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.724 SPDK_TEST_NVMF=1 00:02:07.724 SPDK_TEST_NVME_CLI=1 00:02:07.724 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:07.724 SPDK_TEST_NVMF_NICS=e810 00:02:07.724 SPDK_TEST_VFIOUSER=1 00:02:07.724 SPDK_RUN_UBSAN=1 00:02:07.724 NET_TYPE=phy 00:02:07.724 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:07.724 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:07.733 RUN_NIGHTLY=1 00:02:07.737 [Pipeline] readFile 00:02:07.766 [Pipeline] withEnv 00:02:07.768 [Pipeline] { 00:02:07.782 [Pipeline] sh 00:02:08.072 + set -ex 00:02:08.072 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:08.072 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:08.072 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:08.072 ++ SPDK_TEST_NVMF=1 00:02:08.072 ++ SPDK_TEST_NVME_CLI=1 00:02:08.072 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:08.072 ++ SPDK_TEST_NVMF_NICS=e810 00:02:08.072 ++ SPDK_TEST_VFIOUSER=1 00:02:08.072 ++ SPDK_RUN_UBSAN=1 00:02:08.072 ++ NET_TYPE=phy 00:02:08.072 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:08.072 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:08.072 ++ RUN_NIGHTLY=1 00:02:08.072 + case $SPDK_TEST_NVMF_NICS in 00:02:08.072 + DRIVERS=ice 00:02:08.072 + [[ tcp == \r\d\m\a ]] 00:02:08.072 + [[ -n ice ]] 00:02:08.072 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:08.072 rmmod: ERROR: Module mlx4_ib 
is not currently loaded 00:02:08.072 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:02:08.072 rmmod: ERROR: Module i40iw is not currently loaded 00:02:08.072 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:08.072 + true 00:02:08.072 + for D in $DRIVERS 00:02:08.072 + sudo modprobe ice 00:02:08.072 + exit 0 00:02:08.082 [Pipeline] } 00:02:08.098 [Pipeline] // withEnv 00:02:08.104 [Pipeline] } 00:02:08.118 [Pipeline] // stage 00:02:08.129 [Pipeline] catchError 00:02:08.131 [Pipeline] { 00:02:08.146 [Pipeline] timeout 00:02:08.146 Timeout set to expire in 1 hr 0 min 00:02:08.148 [Pipeline] { 00:02:08.162 [Pipeline] stage 00:02:08.164 [Pipeline] { (Tests) 00:02:08.180 [Pipeline] sh 00:02:08.469 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:08.469 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:08.469 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:08.469 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:08.469 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:08.469 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:08.469 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:08.469 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:08.469 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:08.469 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:08.469 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:08.469 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:08.469 + source /etc/os-release 00:02:08.469 ++ NAME='Fedora Linux' 00:02:08.469 ++ VERSION='39 (Cloud Edition)' 00:02:08.469 ++ ID=fedora 00:02:08.469 ++ VERSION_ID=39 00:02:08.469 ++ VERSION_CODENAME= 00:02:08.469 ++ PLATFORM_ID=platform:f39 00:02:08.469 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:08.469 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:08.469 ++ LOGO=fedora-logo-icon 00:02:08.469 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:08.469 ++ HOME_URL=https://fedoraproject.org/ 00:02:08.469 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:08.469 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:08.469 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:08.469 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:08.469 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:08.469 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:08.469 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:08.469 ++ SUPPORT_END=2024-11-12 00:02:08.469 ++ VARIANT='Cloud Edition' 00:02:08.469 ++ VARIANT_ID=cloud 00:02:08.469 + uname -a 00:02:08.469 Linux spdk-wfp-04 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux 00:02:08.469 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:11.015 Hugepages 00:02:11.015 node hugesize free / total 00:02:11.015 node0 1048576kB 0 / 0 00:02:11.015 node0 2048kB 0 / 0 00:02:11.015 node1 1048576kB 0 / 0 00:02:11.015 node1 2048kB 0 / 0 00:02:11.015 00:02:11.015 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:11.015 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:02:11.015 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 
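The `setup.sh status` hugepage rows above (`node hugesize free / total`) come from the kernel's sysfs counters. A minimal sketch of how such a report can be assembled from a sysfs-style tree; the helper name is hypothetical and the real `scripts/setup.sh` prints more detail:

```shell
# Hypothetical helper rebuilding the "node hugesize free / total" rows from a
# sysfs-style tree rooted at $1 (standard kernel layout:
# node*/hugepages/hugepages-<size>kB/{free_hugepages,nr_hugepages}).
hugepage_report() {
    local base=$1 node hp size
    for node in "$base"/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            size=${hp##*hugepages-}    # e.g. 2048kB
            echo "${node##*/} $size $(cat "$hp/free_hugepages") / $(cat "$hp/nr_hugepages")"
        done
    done
}
# On a real host: hugepage_report /sys/devices/system/node
```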
00:02:11.015 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:02:11.015 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:02:11.015 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:02:11.015 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:02:11.015 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:02:11.015 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:02:11.015 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:02:11.015 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:02:11.015 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:02:11.015 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:02:11.015 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:02:11.015 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:02:11.015 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:02:11.015 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:02:11.015 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:02:11.015 + rm -f /tmp/spdk-ld-path
00:02:11.015 + source autorun-spdk.conf
00:02:11.015 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:11.015 ++ SPDK_TEST_NVMF=1
00:02:11.015 ++ SPDK_TEST_NVME_CLI=1
00:02:11.015 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:11.015 ++ SPDK_TEST_NVMF_NICS=e810
00:02:11.015 ++ SPDK_TEST_VFIOUSER=1
00:02:11.015 ++ SPDK_RUN_UBSAN=1
00:02:11.015 ++ NET_TYPE=phy
00:02:11.015 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:02:11.016 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:11.016 ++ RUN_NIGHTLY=1
00:02:11.016 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:11.016 + [[ -n '' ]]
00:02:11.016 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:11.016 + for M in /var/spdk/build-*-manifest.txt
00:02:11.016 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:11.016 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:11.016 + for M in /var/spdk/build-*-manifest.txt
00:02:11.016 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:11.016 + cp /var/spdk/build-pkg-manifest.txt
/var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:11.016 + for M in /var/spdk/build-*-manifest.txt 00:02:11.016 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:11.016 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:11.016 ++ uname 00:02:11.016 + [[ Linux == \L\i\n\u\x ]] 00:02:11.016 + sudo dmesg -T 00:02:11.016 + sudo dmesg --clear 00:02:11.276 + dmesg_pid=676112 00:02:11.276 + [[ Fedora Linux == FreeBSD ]] 00:02:11.276 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:11.276 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:11.276 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:11.276 + [[ -x /usr/src/fio-static/fio ]] 00:02:11.276 + export FIO_BIN=/usr/src/fio-static/fio 00:02:11.276 + FIO_BIN=/usr/src/fio-static/fio 00:02:11.276 + sudo dmesg -Tw 00:02:11.276 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:11.276 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:11.276 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:11.277 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:11.277 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:11.277 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:11.277 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:11.277 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:11.277 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:11.277 05:52:31 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:11.277 05:52:31 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:11.277 05:52:31 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:11.277 05:52:31 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:11.277 05:52:31 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 
-- $ SPDK_TEST_NVME_CLI=1 00:02:11.277 05:52:31 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:11.277 05:52:31 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:02:11.277 05:52:31 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:02:11.277 05:52:31 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:02:11.277 05:52:31 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:02:11.277 05:52:31 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:11.277 05:52:31 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:11.277 05:52:31 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:02:11.277 05:52:31 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:11.277 05:52:31 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:11.277 05:52:31 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:11.277 05:52:31 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:11.277 05:52:31 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:11.277 05:52:31 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:11.277 05:52:31 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:11.277 05:52:31 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:11.277 05:52:31 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.277 05:52:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.277 05:52:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.277 05:52:31 -- paths/export.sh@5 -- $ export PATH 00:02:11.277 05:52:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.277 05:52:31 -- 
common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:11.277 05:52:31 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:11.277 05:52:31 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734238351.XXXXXX 00:02:11.277 05:52:31 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734238351.bORVbU 00:02:11.277 05:52:31 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:11.277 05:52:31 -- common/autobuild_common.sh@499 -- $ '[' -n v23.11 ']' 00:02:11.277 05:52:31 -- common/autobuild_common.sh@500 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:11.277 05:52:31 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:02:11.277 05:52:31 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:11.277 05:52:31 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:11.277 05:52:31 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:11.277 05:52:31 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:11.277 05:52:31 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.277 05:52:31 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:02:11.277 05:52:31 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:11.277 05:52:31 -- pm/common@17 -- $ local monitor 00:02:11.277 05:52:31 -- pm/common@19 
-- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.277 05:52:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.277 05:52:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.277 05:52:31 -- pm/common@21 -- $ date +%s 00:02:11.277 05:52:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.277 05:52:31 -- pm/common@21 -- $ date +%s 00:02:11.277 05:52:31 -- pm/common@25 -- $ sleep 1 00:02:11.277 05:52:31 -- pm/common@21 -- $ date +%s 00:02:11.277 05:52:31 -- pm/common@21 -- $ date +%s 00:02:11.277 05:52:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734238351 00:02:11.277 05:52:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734238351 00:02:11.277 05:52:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734238351 00:02:11.277 05:52:31 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734238351 00:02:11.277 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734238351_collect-cpu-load.pm.log 00:02:11.277 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734238351_collect-vmstat.pm.log 00:02:11.277 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734238351_collect-cpu-temp.pm.log 00:02:11.277 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734238351_collect-bmc-pm.bmc.pm.log 00:02:12.220 05:52:32 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:12.220 05:52:32 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:12.220 05:52:32 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:12.220 05:52:32 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:12.220 05:52:32 -- spdk/autobuild.sh@16 -- $ date -u 00:02:12.481 Sun Dec 15 04:52:32 AM UTC 2024 00:02:12.481 05:52:32 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:12.481 v25.01-rc1-2-ge01cb43b8 00:02:12.481 05:52:32 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:12.481 05:52:32 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:12.481 05:52:32 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:12.481 05:52:32 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:12.481 05:52:32 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:12.481 05:52:32 -- common/autotest_common.sh@10 -- $ set +x 00:02:12.481 ************************************ 00:02:12.481 START TEST ubsan 00:02:12.481 ************************************ 00:02:12.481 05:52:32 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:12.481 using ubsan 00:02:12.481 00:02:12.481 real 0m0.000s 00:02:12.481 user 0m0.000s 00:02:12.481 sys 0m0.000s 00:02:12.481 05:52:32 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:12.481 05:52:32 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:12.481 ************************************ 00:02:12.481 END TEST ubsan 00:02:12.481 ************************************ 00:02:12.481 05:52:32 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:12.481 05:52:32 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:12.481 05:52:32 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:12.481 05:52:32 -- 
common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:02:12.481 05:52:32 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:12.481 05:52:32 -- common/autotest_common.sh@10 -- $ set +x 00:02:12.481 ************************************ 00:02:12.481 START TEST build_native_dpdk 00:02:12.481 ************************************ 00:02:12.481 05:52:32 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:12.481 05:52:32 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:12.481 05:52:32 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:12.481 05:52:32 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:12.481 05:52:32 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:12.481 05:52:32 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:12.481 05:52:32 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:12.481 05:52:32 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:12.481 05:52:32 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:12.481 05:52:32 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:12.481 05:52:32 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:12.481 05:52:32 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:12.481 05:52:32 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:12.481 05:52:32 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:12.481 05:52:32 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:12.481 05:52:32 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:12.481 05:52:32 build_native_dpdk -- 
common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:12.481 05:52:32 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:12.481 05:52:32 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:02:12.481 05:52:32 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:12.481 05:52:32 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:02:12.481 eeb0605f11 version: 23.11.0 00:02:12.481 238778122a doc: update release notes for 23.11 00:02:12.481 46aa6b3cfc doc: fix description of RSS features 00:02:12.481 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:12.481 7e421ae345 devtools: support skipping forbid rule check 00:02:12.482 05:52:32 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:12.482 05:52:32 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:12.482 05:52:32 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:12.482 05:52:32 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:12.482 05:52:32 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:12.482 05:52:32 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:12.482 05:52:32 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:12.482 05:52:32 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:12.482 05:52:32 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:12.482 05:52:32 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" 
"mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:02:12.482 05:52:32 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:02:12.482 05:52:32 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:12.482 05:52:32 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:12.482 05:52:32 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:02:12.482 05:52:32 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:12.482 05:52:32 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:02:12.482 05:52:32 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:02:12.482 05:52:32 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 21.11.0 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:12.482 05:52:32 
build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:12.482 05:52:32 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:12.482 patching file config/rte_config.h 00:02:12.482 Hunk #1 succeeded at 60 (offset 1 line). 
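The trace above walks a version-comparison helper: it splits each version string on `.`, `-`, and `:` via `IFS`, then compares the numeric components left to right. A minimal re-creation of that idea (a hypothetical sketch, not SPDK's actual `scripts/common.sh`, and assuming purely numeric components rather than validating each one the way the traced `decimal` helper does):

```shell
#!/usr/bin/env bash
# ver_lt A B: succeed (exit 0) iff version A is strictly less than B.
# Sketch of the cmp_versions logic traced in the log, not the real script.
ver_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"   # "23.11.0" -> (23 11 0)
    IFS=.-: read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}   # pad missing components with 0
        (( a > b )) && return 1           # first differing component decides
        (( a < b )) && return 0
    done
    return 1                              # equal versions are not "less than"
}

ver_lt 23.11.0 21.11.0 && echo lt || echo "not lt"   # prints "not lt"
ver_lt 23.11.0 24.07.0 && echo lt || echo "not lt"   # prints "lt"
```

This matches the two comparisons traced in the log: `lt 23.11.0 21.11.0` returns 1 (so the pre-21.11 patch path is skipped), while `lt 23.11.0 24.07.0` returns 0 (so the `rte_pcapng.c` patch is applied).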
00:02:12.482 05:52:32 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 23.11.0 24.07.0 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:12.482 05:52:32 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1 00:02:12.482 patching file lib/pcapng/rte_pcapng.c 00:02:12.482 05:52:32 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 23.11.0 24.07.0 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:12.482 05:52:32 build_native_dpdk -- 
scripts/common.sh@338 -- $ local 'op=>=' 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:12.482 05:52:32 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:12.482 05:52:32 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:02:12.482 05:52:32 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:12.482 05:52:32 build_native_dpdk -- common/autobuild_common.sh@191 -- 
$ '[' Linux = FreeBSD ']' 00:02:12.482 05:52:32 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:02:12.482 05:52:32 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:17.768 The Meson build system 00:02:17.768 Version: 1.5.0 00:02:17.768 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:17.768 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:02:17.768 Build type: native build 00:02:17.768 Program cat found: YES (/usr/bin/cat) 00:02:17.768 Project name: DPDK 00:02:17.768 Project version: 23.11.0 00:02:17.768 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:17.768 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:17.768 Host machine cpu family: x86_64 00:02:17.768 Host machine cpu: x86_64 00:02:17.768 Message: ## Building in Developer Mode ## 00:02:17.768 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:17.768 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:02:17.768 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:02:17.768 Program python3 found: YES (/usr/bin/python3) 00:02:17.768 Program cat found: YES (/usr/bin/cat) 00:02:17.768 config/meson.build:113: WARNING: The "machine" option is deprecated. 
Please use "cpu_instruction_set" instead. 00:02:17.768 Compiler for C supports arguments -march=native: YES 00:02:17.768 Checking for size of "void *" : 8 00:02:17.768 Checking for size of "void *" : 8 (cached) 00:02:17.768 Library m found: YES 00:02:17.768 Library numa found: YES 00:02:17.768 Has header "numaif.h" : YES 00:02:17.768 Library fdt found: NO 00:02:17.768 Library execinfo found: NO 00:02:17.768 Has header "execinfo.h" : YES 00:02:17.768 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:17.768 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:17.768 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:17.768 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:17.768 Run-time dependency openssl found: YES 3.1.1 00:02:17.768 Run-time dependency libpcap found: YES 1.10.4 00:02:17.768 Has header "pcap.h" with dependency libpcap: YES 00:02:17.768 Compiler for C supports arguments -Wcast-qual: YES 00:02:17.768 Compiler for C supports arguments -Wdeprecated: YES 00:02:17.768 Compiler for C supports arguments -Wformat: YES 00:02:17.768 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:17.768 Compiler for C supports arguments -Wformat-security: NO 00:02:17.768 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:17.768 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:17.768 Compiler for C supports arguments -Wnested-externs: YES 00:02:17.768 Compiler for C supports arguments -Wold-style-definition: YES 00:02:17.768 Compiler for C supports arguments -Wpointer-arith: YES 00:02:17.768 Compiler for C supports arguments -Wsign-compare: YES 00:02:17.768 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:17.768 Compiler for C supports arguments -Wundef: YES 00:02:17.768 Compiler for C supports arguments -Wwrite-strings: YES 00:02:17.768 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:17.768 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:02:17.768 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:17.768 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:17.768 Program objdump found: YES (/usr/bin/objdump) 00:02:17.768 Compiler for C supports arguments -mavx512f: YES 00:02:17.768 Checking if "AVX512 checking" compiles: YES 00:02:17.768 Fetching value of define "__SSE4_2__" : 1 00:02:17.768 Fetching value of define "__AES__" : 1 00:02:17.768 Fetching value of define "__AVX__" : 1 00:02:17.768 Fetching value of define "__AVX2__" : 1 00:02:17.768 Fetching value of define "__AVX512BW__" : 1 00:02:17.768 Fetching value of define "__AVX512CD__" : 1 00:02:17.768 Fetching value of define "__AVX512DQ__" : 1 00:02:17.768 Fetching value of define "__AVX512F__" : 1 00:02:17.768 Fetching value of define "__AVX512VL__" : 1 00:02:17.768 Fetching value of define "__PCLMUL__" : 1 00:02:17.768 Fetching value of define "__RDRND__" : 1 00:02:17.768 Fetching value of define "__RDSEED__" : 1 00:02:17.768 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:17.768 Fetching value of define "__znver1__" : (undefined) 00:02:17.768 Fetching value of define "__znver2__" : (undefined) 00:02:17.768 Fetching value of define "__znver3__" : (undefined) 00:02:17.768 Fetching value of define "__znver4__" : (undefined) 00:02:17.768 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:17.768 Message: lib/log: Defining dependency "log" 00:02:17.768 Message: lib/kvargs: Defining dependency "kvargs" 00:02:17.768 Message: lib/telemetry: Defining dependency "telemetry" 00:02:17.768 Checking for function "getentropy" : NO 00:02:17.768 Message: lib/eal: Defining dependency "eal" 00:02:17.768 Message: lib/ring: Defining dependency "ring" 00:02:17.768 Message: lib/rcu: Defining dependency "rcu" 00:02:17.768 Message: lib/mempool: Defining dependency "mempool" 00:02:17.768 Message: lib/mbuf: Defining dependency "mbuf" 00:02:17.768 Fetching value 
of define "__PCLMUL__" : 1 (cached) 00:02:17.768 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:17.768 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:17.768 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:17.768 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:17.769 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:17.769 Compiler for C supports arguments -mpclmul: YES 00:02:17.769 Compiler for C supports arguments -maes: YES 00:02:17.769 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:17.769 Compiler for C supports arguments -mavx512bw: YES 00:02:17.769 Compiler for C supports arguments -mavx512dq: YES 00:02:17.769 Compiler for C supports arguments -mavx512vl: YES 00:02:17.769 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:17.769 Compiler for C supports arguments -mavx2: YES 00:02:17.769 Compiler for C supports arguments -mavx: YES 00:02:17.769 Message: lib/net: Defining dependency "net" 00:02:17.769 Message: lib/meter: Defining dependency "meter" 00:02:17.769 Message: lib/ethdev: Defining dependency "ethdev" 00:02:17.769 Message: lib/pci: Defining dependency "pci" 00:02:17.769 Message: lib/cmdline: Defining dependency "cmdline" 00:02:17.769 Message: lib/metrics: Defining dependency "metrics" 00:02:17.769 Message: lib/hash: Defining dependency "hash" 00:02:17.769 Message: lib/timer: Defining dependency "timer" 00:02:17.769 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:17.769 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:17.769 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:17.769 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:17.769 Message: lib/acl: Defining dependency "acl" 00:02:17.769 Message: lib/bbdev: Defining dependency "bbdev" 00:02:17.769 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:17.769 Run-time dependency libelf found: YES 0.191 00:02:17.769 Message: lib/bpf: Defining dependency "bpf" 
00:02:17.769 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:17.769 Message: lib/compressdev: Defining dependency "compressdev" 00:02:17.769 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:17.769 Message: lib/distributor: Defining dependency "distributor" 00:02:17.769 Message: lib/dmadev: Defining dependency "dmadev" 00:02:17.769 Message: lib/efd: Defining dependency "efd" 00:02:17.769 Message: lib/eventdev: Defining dependency "eventdev" 00:02:17.769 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:17.769 Message: lib/gpudev: Defining dependency "gpudev" 00:02:17.769 Message: lib/gro: Defining dependency "gro" 00:02:17.769 Message: lib/gso: Defining dependency "gso" 00:02:17.769 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:17.769 Message: lib/jobstats: Defining dependency "jobstats" 00:02:17.769 Message: lib/latencystats: Defining dependency "latencystats" 00:02:17.769 Message: lib/lpm: Defining dependency "lpm" 00:02:17.769 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:17.769 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:17.769 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:17.769 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:17.769 Message: lib/member: Defining dependency "member" 00:02:17.769 Message: lib/pcapng: Defining dependency "pcapng" 00:02:17.769 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:17.769 Message: lib/power: Defining dependency "power" 00:02:17.769 Message: lib/rawdev: Defining dependency "rawdev" 00:02:17.769 Message: lib/regexdev: Defining dependency "regexdev" 00:02:17.769 Message: lib/mldev: Defining dependency "mldev" 00:02:17.769 Message: lib/rib: Defining dependency "rib" 00:02:17.769 Message: lib/reorder: Defining dependency "reorder" 00:02:17.769 Message: lib/sched: Defining dependency "sched" 00:02:17.769 Message: lib/security: Defining dependency "security" 00:02:17.769 Message: lib/stack: 
Defining dependency "stack" 00:02:17.769 Has header "linux/userfaultfd.h" : YES 00:02:17.769 Has header "linux/vduse.h" : YES 00:02:17.769 Message: lib/vhost: Defining dependency "vhost" 00:02:17.769 Message: lib/ipsec: Defining dependency "ipsec" 00:02:17.769 Message: lib/pdcp: Defining dependency "pdcp" 00:02:17.769 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:17.769 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:17.769 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:17.769 Message: lib/fib: Defining dependency "fib" 00:02:17.769 Message: lib/port: Defining dependency "port" 00:02:17.769 Message: lib/pdump: Defining dependency "pdump" 00:02:17.769 Message: lib/table: Defining dependency "table" 00:02:17.769 Message: lib/pipeline: Defining dependency "pipeline" 00:02:17.769 Message: lib/graph: Defining dependency "graph" 00:02:17.769 Message: lib/node: Defining dependency "node" 00:02:17.769 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:18.719 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:18.719 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:18.719 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:18.719 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:18.719 Compiler for C supports arguments -Wno-unused-value: YES 00:02:18.719 Compiler for C supports arguments -Wno-format: YES 00:02:18.719 Compiler for C supports arguments -Wno-format-security: YES 00:02:18.719 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:18.719 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:18.719 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:18.719 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:18.719 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:18.719 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:18.719 Compiler for C supports arguments 
-mavx512f: YES (cached) 00:02:18.719 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:18.719 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:18.719 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:18.719 Has header "sys/epoll.h" : YES 00:02:18.719 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:18.719 Configuring doxy-api-html.conf using configuration 00:02:18.719 Configuring doxy-api-man.conf using configuration 00:02:18.719 Program mandb found: YES (/usr/bin/mandb) 00:02:18.719 Program sphinx-build found: NO 00:02:18.719 Configuring rte_build_config.h using configuration 00:02:18.719 Message: 00:02:18.719 ================= 00:02:18.719 Applications Enabled 00:02:18.719 ================= 00:02:18.719 00:02:18.719 apps: 00:02:18.719 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:18.719 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:18.719 test-pmd, test-regex, test-sad, test-security-perf, 00:02:18.719 00:02:18.719 Message: 00:02:18.719 ================= 00:02:18.719 Libraries Enabled 00:02:18.719 ================= 00:02:18.719 00:02:18.719 libs: 00:02:18.719 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:18.719 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:18.719 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:18.719 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:18.719 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:18.719 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:18.719 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:18.719 00:02:18.719 00:02:18.719 Message: 00:02:18.719 =============== 00:02:18.719 Drivers Enabled 00:02:18.719 =============== 00:02:18.719 00:02:18.719 common: 00:02:18.719 00:02:18.719 bus: 00:02:18.719 pci, vdev, 
00:02:18.719 mempool: 00:02:18.719 ring, 00:02:18.719 dma: 00:02:18.719 00:02:18.719 net: 00:02:18.719 i40e, 00:02:18.719 raw: 00:02:18.719 00:02:18.719 crypto: 00:02:18.719 00:02:18.719 compress: 00:02:18.719 00:02:18.719 regex: 00:02:18.719 00:02:18.719 ml: 00:02:18.719 00:02:18.719 vdpa: 00:02:18.719 00:02:18.719 event: 00:02:18.719 00:02:18.719 baseband: 00:02:18.719 00:02:18.719 gpu: 00:02:18.719 00:02:18.719 00:02:18.719 Message: 00:02:18.719 ================= 00:02:18.719 Content Skipped 00:02:18.719 ================= 00:02:18.719 00:02:18.719 apps: 00:02:18.719 00:02:18.719 libs: 00:02:18.719 00:02:18.719 drivers: 00:02:18.719 common/cpt: not in enabled drivers build config 00:02:18.719 common/dpaax: not in enabled drivers build config 00:02:18.719 common/iavf: not in enabled drivers build config 00:02:18.719 common/idpf: not in enabled drivers build config 00:02:18.719 common/mvep: not in enabled drivers build config 00:02:18.719 common/octeontx: not in enabled drivers build config 00:02:18.719 bus/auxiliary: not in enabled drivers build config 00:02:18.719 bus/cdx: not in enabled drivers build config 00:02:18.719 bus/dpaa: not in enabled drivers build config 00:02:18.719 bus/fslmc: not in enabled drivers build config 00:02:18.719 bus/ifpga: not in enabled drivers build config 00:02:18.720 bus/platform: not in enabled drivers build config 00:02:18.720 bus/vmbus: not in enabled drivers build config 00:02:18.720 common/cnxk: not in enabled drivers build config 00:02:18.720 common/mlx5: not in enabled drivers build config 00:02:18.720 common/nfp: not in enabled drivers build config 00:02:18.720 common/qat: not in enabled drivers build config 00:02:18.720 common/sfc_efx: not in enabled drivers build config 00:02:18.720 mempool/bucket: not in enabled drivers build config 00:02:18.720 mempool/cnxk: not in enabled drivers build config 00:02:18.720 mempool/dpaa: not in enabled drivers build config 00:02:18.720 mempool/dpaa2: not in enabled drivers build config 
00:02:18.720 mempool/octeontx: not in enabled drivers build config 00:02:18.720 mempool/stack: not in enabled drivers build config 00:02:18.720 dma/cnxk: not in enabled drivers build config 00:02:18.720 dma/dpaa: not in enabled drivers build config 00:02:18.720 dma/dpaa2: not in enabled drivers build config 00:02:18.720 dma/hisilicon: not in enabled drivers build config 00:02:18.720 dma/idxd: not in enabled drivers build config 00:02:18.720 dma/ioat: not in enabled drivers build config 00:02:18.720 dma/skeleton: not in enabled drivers build config 00:02:18.720 net/af_packet: not in enabled drivers build config 00:02:18.720 net/af_xdp: not in enabled drivers build config 00:02:18.720 net/ark: not in enabled drivers build config 00:02:18.720 net/atlantic: not in enabled drivers build config 00:02:18.720 net/avp: not in enabled drivers build config 00:02:18.720 net/axgbe: not in enabled drivers build config 00:02:18.720 net/bnx2x: not in enabled drivers build config 00:02:18.720 net/bnxt: not in enabled drivers build config 00:02:18.720 net/bonding: not in enabled drivers build config 00:02:18.720 net/cnxk: not in enabled drivers build config 00:02:18.720 net/cpfl: not in enabled drivers build config 00:02:18.720 net/cxgbe: not in enabled drivers build config 00:02:18.720 net/dpaa: not in enabled drivers build config 00:02:18.720 net/dpaa2: not in enabled drivers build config 00:02:18.720 net/e1000: not in enabled drivers build config 00:02:18.720 net/ena: not in enabled drivers build config 00:02:18.720 net/enetc: not in enabled drivers build config 00:02:18.720 net/enetfec: not in enabled drivers build config 00:02:18.720 net/enic: not in enabled drivers build config 00:02:18.720 net/failsafe: not in enabled drivers build config 00:02:18.720 net/fm10k: not in enabled drivers build config 00:02:18.720 net/gve: not in enabled drivers build config 00:02:18.720 net/hinic: not in enabled drivers build config 00:02:18.720 net/hns3: not in enabled drivers build config 
00:02:18.720 net/iavf: not in enabled drivers build config 00:02:18.720 net/ice: not in enabled drivers build config 00:02:18.720 net/idpf: not in enabled drivers build config 00:02:18.720 net/igc: not in enabled drivers build config 00:02:18.720 net/ionic: not in enabled drivers build config 00:02:18.720 net/ipn3ke: not in enabled drivers build config 00:02:18.720 net/ixgbe: not in enabled drivers build config 00:02:18.720 net/mana: not in enabled drivers build config 00:02:18.720 net/memif: not in enabled drivers build config 00:02:18.720 net/mlx4: not in enabled drivers build config 00:02:18.720 net/mlx5: not in enabled drivers build config 00:02:18.720 net/mvneta: not in enabled drivers build config 00:02:18.720 net/mvpp2: not in enabled drivers build config 00:02:18.720 net/netvsc: not in enabled drivers build config 00:02:18.720 net/nfb: not in enabled drivers build config 00:02:18.720 net/nfp: not in enabled drivers build config 00:02:18.720 net/ngbe: not in enabled drivers build config 00:02:18.720 net/null: not in enabled drivers build config 00:02:18.720 net/octeontx: not in enabled drivers build config 00:02:18.720 net/octeon_ep: not in enabled drivers build config 00:02:18.720 net/pcap: not in enabled drivers build config 00:02:18.720 net/pfe: not in enabled drivers build config 00:02:18.720 net/qede: not in enabled drivers build config 00:02:18.720 net/ring: not in enabled drivers build config 00:02:18.720 net/sfc: not in enabled drivers build config 00:02:18.720 net/softnic: not in enabled drivers build config 00:02:18.720 net/tap: not in enabled drivers build config 00:02:18.720 net/thunderx: not in enabled drivers build config 00:02:18.720 net/txgbe: not in enabled drivers build config 00:02:18.720 net/vdev_netvsc: not in enabled drivers build config 00:02:18.720 net/vhost: not in enabled drivers build config 00:02:18.720 net/virtio: not in enabled drivers build config 00:02:18.720 net/vmxnet3: not in enabled drivers build config 00:02:18.720 
raw/cnxk_bphy: not in enabled drivers build config 00:02:18.720 raw/cnxk_gpio: not in enabled drivers build config 00:02:18.720 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:18.720 raw/ifpga: not in enabled drivers build config 00:02:18.720 raw/ntb: not in enabled drivers build config 00:02:18.720 raw/skeleton: not in enabled drivers build config 00:02:18.720 crypto/armv8: not in enabled drivers build config 00:02:18.720 crypto/bcmfs: not in enabled drivers build config 00:02:18.720 crypto/caam_jr: not in enabled drivers build config 00:02:18.720 crypto/ccp: not in enabled drivers build config 00:02:18.720 crypto/cnxk: not in enabled drivers build config 00:02:18.720 crypto/dpaa_sec: not in enabled drivers build config 00:02:18.720 crypto/dpaa2_sec: not in enabled drivers build config 00:02:18.720 crypto/ipsec_mb: not in enabled drivers build config 00:02:18.720 crypto/mlx5: not in enabled drivers build config 00:02:18.720 crypto/mvsam: not in enabled drivers build config 00:02:18.720 crypto/nitrox: not in enabled drivers build config 00:02:18.720 crypto/null: not in enabled drivers build config 00:02:18.720 crypto/octeontx: not in enabled drivers build config 00:02:18.720 crypto/openssl: not in enabled drivers build config 00:02:18.720 crypto/scheduler: not in enabled drivers build config 00:02:18.720 crypto/uadk: not in enabled drivers build config 00:02:18.720 crypto/virtio: not in enabled drivers build config 00:02:18.720 compress/isal: not in enabled drivers build config 00:02:18.720 compress/mlx5: not in enabled drivers build config 00:02:18.720 compress/octeontx: not in enabled drivers build config 00:02:18.720 compress/zlib: not in enabled drivers build config 00:02:18.720 regex/mlx5: not in enabled drivers build config 00:02:18.720 regex/cn9k: not in enabled drivers build config 00:02:18.720 ml/cnxk: not in enabled drivers build config 00:02:18.720 vdpa/ifc: not in enabled drivers build config 00:02:18.720 vdpa/mlx5: not in enabled drivers 
build config 00:02:18.720 vdpa/nfp: not in enabled drivers build config 00:02:18.720 vdpa/sfc: not in enabled drivers build config 00:02:18.720 event/cnxk: not in enabled drivers build config 00:02:18.720 event/dlb2: not in enabled drivers build config 00:02:18.720 event/dpaa: not in enabled drivers build config 00:02:18.720 event/dpaa2: not in enabled drivers build config 00:02:18.720 event/dsw: not in enabled drivers build config 00:02:18.720 event/opdl: not in enabled drivers build config 00:02:18.720 event/skeleton: not in enabled drivers build config 00:02:18.720 event/sw: not in enabled drivers build config 00:02:18.720 event/octeontx: not in enabled drivers build config 00:02:18.720 baseband/acc: not in enabled drivers build config 00:02:18.720 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:18.720 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:18.720 baseband/la12xx: not in enabled drivers build config 00:02:18.720 baseband/null: not in enabled drivers build config 00:02:18.720 baseband/turbo_sw: not in enabled drivers build config 00:02:18.720 gpu/cuda: not in enabled drivers build config 00:02:18.720 00:02:18.720 00:02:18.720 Build targets in project: 217 00:02:18.720 00:02:18.720 DPDK 23.11.0 00:02:18.720 00:02:18.720 User defined options 00:02:18.720 libdir : lib 00:02:18.720 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:18.720 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:18.720 c_link_args : 00:02:18.720 enable_docs : false 00:02:18.720 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:18.720 enable_kmods : false 00:02:18.720 machine : native 00:02:18.720 tests : false 00:02:18.720 00:02:18.720 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:18.720 WARNING: Running the setup command as `meson [options]` instead of `meson setup 
[options]` is ambiguous and deprecated. 00:02:18.720 05:52:38 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 00:02:18.720 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:18.720 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:18.720 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:18.720 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:18.720 [4/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:18.720 [5/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:18.721 [6/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:18.721 [7/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:18.990 [8/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:18.990 [9/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:18.990 [10/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:18.990 [11/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:18.990 [12/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:18.990 [13/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:18.990 [14/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:18.990 [15/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:18.990 [16/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:18.990 [17/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:18.990 [18/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:18.990 [19/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 
00:02:18.990 [20/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:18.990 [21/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:18.990 [22/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:18.990 [23/707] Linking static target lib/librte_kvargs.a 00:02:18.990 [24/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:18.990 [25/707] Linking static target lib/librte_pci.a 00:02:18.990 [26/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:18.990 [27/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:18.990 [28/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:18.990 [29/707] Linking static target lib/librte_log.a 00:02:18.990 [30/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:18.990 [31/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:18.990 [32/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:19.259 [33/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:19.259 [34/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:19.259 [35/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:19.259 [36/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:19.259 [37/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:19.259 [38/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:19.259 [39/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:19.259 [40/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:19.259 [41/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.521 [42/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:19.521 [43/707] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:19.521 [44/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:19.521 [45/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:19.521 [46/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:19.521 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:19.521 [48/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:19.521 [49/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.521 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:19.521 [51/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:19.521 [52/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:19.521 [53/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:19.521 [54/707] Linking static target lib/librte_ring.a 00:02:19.521 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:19.521 [56/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:19.521 [57/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:19.521 [58/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:19.521 [59/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:19.521 [60/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:19.521 [61/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:19.521 [62/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:19.521 [63/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:19.521 [64/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:19.521 [65/707] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:19.521 [66/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:19.521 [67/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:19.521 [68/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:19.521 [69/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:19.521 [70/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:19.521 [71/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:19.521 [72/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:19.521 [73/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:19.521 [74/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:19.521 [75/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:19.521 [76/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:19.790 [77/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:19.790 [78/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:19.790 [79/707] Linking static target lib/librte_meter.a 00:02:19.791 [80/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:19.791 [81/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:19.791 [82/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:19.791 [83/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:19.791 [84/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:19.791 [85/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:19.791 [86/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:19.791 [87/707] Linking static target lib/librte_cmdline.a 00:02:19.791 [88/707] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:19.791 [89/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:19.791 [90/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:19.791 [91/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:19.791 [92/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:19.791 [93/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:19.791 [94/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:19.791 [95/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:19.791 [96/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:19.791 [97/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:19.791 [98/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:19.791 [99/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:19.791 [100/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:19.791 [101/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:19.791 [102/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.791 [103/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:19.791 [104/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:19.791 [105/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:19.791 [106/707] Linking static target lib/librte_metrics.a 00:02:19.791 [107/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:19.791 [108/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:19.791 [109/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:19.791 [110/707] Linking static target lib/librte_net.a 00:02:20.051 
[111/707] Linking target lib/librte_log.so.24.0 00:02:20.051 [112/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:20.051 [113/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:20.051 [114/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:20.051 [115/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:20.051 [116/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:20.051 [117/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:20.051 [118/707] Linking static target lib/librte_cfgfile.a 00:02:20.051 [119/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.051 [120/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.051 [121/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:20.051 [122/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:20.051 [123/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:20.051 [124/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:20.051 [125/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:20.051 [126/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:20.051 [127/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:20.051 [128/707] Linking static target lib/librte_bitratestats.a 00:02:20.314 [129/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:20.314 [130/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:20.314 [131/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:20.314 [132/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:20.314 [133/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:20.314 [134/707] Compiling C object 
lib/librte_power.a.p/power_power_common.c.o 00:02:20.314 [135/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:20.314 [136/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:20.314 [137/707] Linking target lib/librte_kvargs.so.24.0 00:02:20.314 [138/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:20.315 [139/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:20.315 [140/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:20.315 [141/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:20.315 [142/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:20.315 [143/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:20.315 [144/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:20.315 [145/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.315 [146/707] Linking static target lib/librte_mempool.a 00:02:20.315 [147/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:20.315 [148/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:20.579 [149/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:20.579 [150/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:20.579 [151/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:20.579 [152/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:20.579 [153/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:20.579 [154/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:20.579 [155/707] Linking static target lib/librte_timer.a 00:02:20.579 [156/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 
00:02:20.579 [157/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:20.579 [158/707] Linking static target lib/librte_compressdev.a 00:02:20.579 [159/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:20.579 [160/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:20.579 [161/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:20.579 [162/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:20.579 [163/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:20.579 [164/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:20.579 [165/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:20.579 [166/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.579 [167/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:20.579 [168/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:20.579 [169/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.579 [170/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:20.579 [171/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.579 [172/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:20.579 [173/707] Linking static target lib/librte_rcu.a 00:02:20.579 [174/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:20.579 [175/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:20.579 [176/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:20.579 [177/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:20.579 [178/707] Linking static target lib/librte_jobstats.a 00:02:20.579 [179/707] Compiling C object 
lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:20.579 [180/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:20.579 [181/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:20.844 [182/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:20.844 [183/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:20.844 [184/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:20.844 [185/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:20.844 [186/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:20.844 [187/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:20.844 [188/707] Linking static target lib/librte_dispatcher.a 00:02:20.844 [189/707] Linking static target lib/librte_bbdev.a 00:02:20.844 [190/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:20.844 [191/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:20.844 [192/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:20.844 [193/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:20.844 [194/707] Linking static target lib/librte_dmadev.a 00:02:20.844 [195/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:20.844 [196/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:20.844 [197/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:20.844 [198/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:20.844 [199/707] Linking static target lib/librte_mbuf.a 00:02:20.844 [200/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:20.844 [201/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:20.844 [202/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:20.844 [203/707] 
Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:20.844 [204/707] Linking static target lib/librte_latencystats.a 00:02:20.844 [205/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:20.844 [206/707] Linking static target lib/librte_gro.a 00:02:20.844 [207/707] Linking static target lib/librte_gpudev.a 00:02:20.844 [208/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:21.105 [209/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:21.105 [210/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:21.105 [211/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:21.105 [212/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:21.105 [213/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:21.105 [214/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:21.105 [215/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:21.105 [216/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:21.105 [217/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:21.105 [218/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:21.105 [219/707] Linking static target lib/librte_telemetry.a 00:02:21.105 [220/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:21.105 [221/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:21.105 [222/707] Linking static target lib/librte_eal.a 00:02:21.105 [223/707] Linking static target lib/librte_gso.a 00:02:21.105 [224/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:21.105 [225/707] Linking static target lib/librte_distributor.a 00:02:21.105 [226/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:21.105 [227/707] Compiling C 
object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:21.105 [228/707] Linking static target lib/librte_ip_frag.a 00:02:21.105 [229/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.105 [230/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.105 [231/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:21.105 [232/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:21.105 [233/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:21.105 [234/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:21.105 [235/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.105 [236/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:21.105 [237/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:21.105 [238/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:21.105 [239/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:21.105 [240/707] Linking static target lib/librte_stack.a 00:02:21.369 [241/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.369 [242/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:21.369 [243/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:21.369 [244/707] Linking static target lib/librte_regexdev.a 00:02:21.369 [245/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.369 [246/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:21.369 [247/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.369 [248/707] Linking static target lib/librte_rawdev.a 00:02:21.369 [249/707] 
Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:21.369 [250/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:21.370 [251/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:21.370 [252/707] Linking static target lib/librte_pcapng.a 00:02:21.370 [253/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.370 [254/707] Linking static target lib/librte_mldev.a 00:02:21.370 [255/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:21.370 [256/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:21.370 [257/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:21.370 [258/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.636 [259/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:21.636 [260/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:21.636 [261/707] Linking static target lib/librte_power.a 00:02:21.636 [262/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:21.636 [263/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:21.636 [264/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:21.636 [265/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:21.636 [266/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:21.636 [267/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.636 [268/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:21.636 [269/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.636 [270/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.636 [271/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 
00:02:21.636 [272/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.636 [273/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.636 [274/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.636 [275/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:21.636 [276/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:21.636 [277/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:21.636 [278/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:21.636 [279/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:21.636 [280/707] Linking static target lib/librte_reorder.a 00:02:21.636 [281/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:21.636 [282/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:21.636 [283/707] Linking static target lib/librte_security.a 00:02:21.636 [284/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:21.636 [285/707] Linking static target lib/librte_bpf.a 00:02:21.636 [286/707] Linking static target lib/librte_lpm.a 00:02:21.636 [287/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:21.907 [288/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:21.907 [289/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:21.907 [290/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:21.907 [291/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.907 [292/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.907 [293/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:21.907 [294/707] Compiling C object 
lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:21.907 [295/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.907 [296/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:21.907 [297/707] Linking target lib/librte_telemetry.so.24.0 00:02:21.907 [298/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:21.907 [299/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:21.907 [300/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.907 [301/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:22.172 [302/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:22.172 [303/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:22.172 [304/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:22.172 [305/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:22.172 [306/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:22.172 [307/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:22.172 [308/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:22.172 [309/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.172 [310/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:22.172 [311/707] Linking static target lib/librte_rib.a 00:02:22.172 [312/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:22.173 [313/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:22.173 [314/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:22.173 [315/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:22.173 [316/707] Linking static target lib/librte_efd.a 
00:02:22.173 [317/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:22.173 [318/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:22.173 [319/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:22.173 [320/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:22.173 [321/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.173 [322/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.173 [323/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:22.173 [324/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:22.173 [325/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:22.437 [326/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:22.437 [327/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:22.437 [328/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:22.437 [329/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:22.437 [330/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:22.437 [331/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.437 [332/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.437 [333/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:22.437 [334/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:22.437 [335/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:22.437 [336/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:22.437 [337/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:22.437 [338/707] Compiling C object 
lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:22.437 [339/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:22.710 [340/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.710 [341/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:22.710 [342/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.710 [343/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:22.710 [344/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:22.710 [345/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:22.710 [346/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.710 [347/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:22.710 [348/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:22.710 [349/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:22.710 [350/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:22.710 [351/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:22.710 [352/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:22.710 [353/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.710 [354/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:22.710 [355/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:22.710 [356/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:22.710 [357/707] Linking static target lib/librte_fib.a 00:02:22.710 [358/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:22.710 [359/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:22.710 [360/707] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:22.710 [361/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:22.972 [362/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:22.972 [363/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:22.972 [364/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:22.972 [365/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:22.972 [366/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:22.972 [367/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.972 [368/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:22.972 [369/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:22.972 [370/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:22.972 [371/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:22.972 [372/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:22.972 [373/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:22.972 [374/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:22.972 [375/707] Linking static target lib/librte_pdump.a 00:02:22.972 [376/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:22.972 [377/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:22.972 [378/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:23.244 [379/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:23.244 [380/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:23.244 [381/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:23.244 [382/707] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:23.244 [383/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:23.244 [384/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:23.244 [385/707] Linking static target lib/librte_graph.a 00:02:23.244 [386/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:23.244 [387/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:23.244 [388/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:23.244 [389/707] Linking static target lib/librte_cryptodev.a 00:02:23.244 [390/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:23.511 [391/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:23.511 [392/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:23.511 [393/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:23.511 [394/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:23.511 [395/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:23.511 [396/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:23.511 [397/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:23.511 [398/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:23.511 [399/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:23.511 [400/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:23.511 [401/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:23.511 [402/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:23.511 [403/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.511 [404/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:23.511 [405/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:23.511 [406/707] Compiling 
C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:23.511 [407/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:23.511 [408/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.511 [409/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:23.511 [410/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:23.511 [411/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:23.511 [412/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:23.511 [413/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:23.511 [414/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:23.511 [415/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.511 [416/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:23.511 [417/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:23.511 [418/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.511 [419/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.511 [420/707] Linking static target drivers/librte_bus_vdev.a 00:02:23.511 [421/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:23.774 [422/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:23.774 [423/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:23.774 [424/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:23.774 [425/707] Linking static target lib/librte_table.a 00:02:23.774 [426/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:23.774 [427/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:23.774 [428/707] Compiling C object 
lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:23.774 [429/707] Linking static target lib/librte_ipsec.a 00:02:23.774 [430/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:23.774 [431/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:23.774 [432/707] Linking static target lib/librte_sched.a 00:02:23.774 [433/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:23.774 [434/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:23.774 [435/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:23.774 [436/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:23.774 [437/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:23.774 [438/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.774 [439/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:23.774 [440/707] Linking static target drivers/librte_bus_pci.a 00:02:24.043 [441/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:24.043 [442/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:24.043 [443/707] Linking static target lib/librte_hash.a 00:02:24.043 [444/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:24.043 [445/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:24.043 [446/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:24.043 [447/707] Linking static target lib/librte_member.a 00:02:24.043 [448/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:24.043 [449/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:24.043 [450/707] Compiling C object 
app/dpdk-test-acl.p/test-acl_main.c.o 00:02:24.043 [451/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:24.043 [452/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:24.043 [453/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:24.043 [454/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:24.043 [455/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:24.043 [456/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:24.043 [457/707] Linking static target lib/librte_node.a 00:02:24.043 [458/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:24.043 [459/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.043 [460/707] Linking static target lib/acl/libavx2_tmp.a 00:02:24.311 [461/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:24.311 [462/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:24.311 [463/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:24.311 [464/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:24.311 [465/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:24.311 [466/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:24.311 [467/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:24.311 [468/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:24.311 [469/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.311 [470/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.311 [471/707] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:24.311 [472/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:24.311 [473/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:24.311 [474/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:24.311 [475/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:24.311 [476/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:24.311 [477/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:24.311 [478/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:24.311 [479/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:24.574 [480/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:24.574 [481/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:24.574 [482/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:24.575 [483/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:24.575 [484/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:24.575 [485/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:24.575 [486/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:24.575 [487/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:24.575 [488/707] Linking static target lib/librte_eventdev.a 00:02:24.575 [489/707] Linking static target drivers/librte_mempool_ring.a 00:02:24.575 [490/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:24.575 [491/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 
00:02:24.575 [492/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:24.575 [493/707] Linking static target lib/librte_port.a 00:02:24.575 [494/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:24.575 [495/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.575 [496/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:24.575 [497/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:24.575 [498/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.575 [499/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:24.575 [500/707] Linking static target lib/librte_pdcp.a 00:02:24.575 [501/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:24.575 [502/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:24.575 [503/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:24.575 [504/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:24.575 [505/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:24.575 [506/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:24.575 [507/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.575 [508/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:24.835 [509/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:24.835 [510/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:24.835 [511/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:24.835 [512/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:24.835 [513/707] Compiling 
C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:24.835 [514/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.835 [515/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:24.835 [516/707] Linking static target lib/librte_acl.a 00:02:24.835 [517/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:24.835 [518/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:24.835 [519/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:24.835 [520/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:24.835 [521/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:24.835 [522/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:24.835 [523/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.835 [524/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.835 [525/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:24.835 [526/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:24.835 [527/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:25.094 [528/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.094 [529/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:25.094 [530/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:25.094 [531/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:25.094 [532/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:25.094 [533/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.094 
[534/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:25.094 [535/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:25.094 [536/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:25.094 [537/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:25.094 [538/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:25.094 [539/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.094 [540/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:25.094 [541/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:25.094 [542/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:25.094 [543/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:25.094 [544/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:25.094 [545/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:25.353 [546/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:25.353 [547/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:25.353 [548/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:25.353 [549/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:25.353 [550/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.353 [551/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:25.353 [552/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:25.353 [553/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:25.353 [554/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:25.353 [555/707] Compiling C object 
app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:25.353 [556/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:25.353 [557/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:25.353 [558/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:25.353 [559/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:25.353 [560/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:25.353 [561/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:25.353 [562/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:25.353 [563/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:25.613 [564/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:25.613 [565/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:25.613 [566/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:25.613 [567/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:25.613 [568/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:25.613 [569/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:25.872 [570/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:25.872 [571/707] Linking static target lib/librte_ethdev.a 00:02:25.872 [572/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:25.872 [573/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:26.132 [574/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:26.391 [575/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:26.651 [576/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:26.651 [577/707] Compiling C 
object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:26.912 [578/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:26.912 [579/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:27.171 [580/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:27.431 [581/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.692 [582/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:27.692 [583/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:27.692 [584/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:28.261 [585/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:28.261 [586/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:28.261 [587/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:28.261 [588/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:28.261 [589/707] Linking static target drivers/librte_net_i40e.a 00:02:29.200 [590/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.460 [591/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:29.719 [592/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:31.629 [593/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.629 [594/707] Linking target lib/librte_eal.so.24.0 00:02:31.629 [595/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:31.629 [596/707] Linking target lib/librte_ring.so.24.0 00:02:31.629 [597/707] Linking target lib/librte_meter.so.24.0 00:02:31.629 [598/707] Linking target lib/librte_pci.so.24.0 00:02:31.629 [599/707] Linking target lib/librte_timer.so.24.0 
00:02:31.629 [600/707] Linking target lib/librte_cfgfile.so.24.0 00:02:31.629 [601/707] Linking target drivers/librte_bus_vdev.so.24.0 00:02:31.629 [602/707] Linking target lib/librte_jobstats.so.24.0 00:02:31.629 [603/707] Linking target lib/librte_dmadev.so.24.0 00:02:31.629 [604/707] Linking target lib/librte_stack.so.24.0 00:02:31.629 [605/707] Linking target lib/librte_rawdev.so.24.0 00:02:31.629 [606/707] Linking target lib/librte_acl.so.24.0 00:02:31.890 [607/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:31.890 [608/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:31.890 [609/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:31.890 [610/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:31.890 [611/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:31.890 [612/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:31.890 [613/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:31.891 [614/707] Linking target lib/librte_mempool.so.24.0 00:02:31.891 [615/707] Linking target lib/librte_rcu.so.24.0 00:02:31.891 [616/707] Linking target drivers/librte_bus_pci.so.24.0 00:02:32.150 [617/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:32.150 [618/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:32.150 [619/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:32.150 [620/707] Linking target drivers/librte_mempool_ring.so.24.0 00:02:32.150 [621/707] Linking target lib/librte_rib.so.24.0 00:02:32.150 [622/707] Linking target lib/librte_mbuf.so.24.0 00:02:32.150 [623/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:32.150 [624/707] 
Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:32.150 [625/707] Linking target lib/librte_sched.so.24.0 00:02:32.150 [626/707] Linking target lib/librte_bbdev.so.24.0 00:02:32.150 [627/707] Linking target lib/librte_cryptodev.so.24.0 00:02:32.150 [628/707] Linking target lib/librte_net.so.24.0 00:02:32.150 [629/707] Linking target lib/librte_compressdev.so.24.0 00:02:32.150 [630/707] Linking target lib/librte_distributor.so.24.0 00:02:32.150 [631/707] Linking target lib/librte_gpudev.so.24.0 00:02:32.150 [632/707] Linking target lib/librte_regexdev.so.24.0 00:02:32.150 [633/707] Linking target lib/librte_reorder.so.24.0 00:02:32.150 [634/707] Linking target lib/librte_mldev.so.24.0 00:02:32.150 [635/707] Linking target lib/librte_fib.so.24.0 00:02:32.410 [636/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:32.410 [637/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:32.410 [638/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:32.410 [639/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:32.410 [640/707] Linking target lib/librte_cmdline.so.24.0 00:02:32.410 [641/707] Linking target lib/librte_hash.so.24.0 00:02:32.410 [642/707] Linking target lib/librte_security.so.24.0 00:02:32.669 [643/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:32.669 [644/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:32.669 [645/707] Linking target lib/librte_efd.so.24.0 00:02:32.669 [646/707] Linking target lib/librte_lpm.so.24.0 00:02:32.669 [647/707] Linking target lib/librte_member.so.24.0 00:02:32.669 [648/707] Linking target lib/librte_pdcp.so.24.0 00:02:32.669 [649/707] Linking target lib/librte_ipsec.so.24.0 00:02:32.669 [650/707] Generating symbol file 
lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:32.669 [651/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:33.239 [652/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.239 [653/707] Linking target lib/librte_ethdev.so.24.0 00:02:33.499 [654/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:33.499 [655/707] Linking target lib/librte_ip_frag.so.24.0 00:02:33.499 [656/707] Linking target lib/librte_bpf.so.24.0 00:02:33.499 [657/707] Linking target lib/librte_metrics.so.24.0 00:02:33.499 [658/707] Linking target lib/librte_pcapng.so.24.0 00:02:33.499 [659/707] Linking target lib/librte_gso.so.24.0 00:02:33.499 [660/707] Linking target lib/librte_gro.so.24.0 00:02:33.499 [661/707] Linking target lib/librte_power.so.24.0 00:02:33.499 [662/707] Linking target lib/librte_eventdev.so.24.0 00:02:33.499 [663/707] Linking target drivers/librte_net_i40e.so.24.0 00:02:33.759 [664/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:33.759 [665/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:33.759 [666/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:33.759 [667/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:33.759 [668/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:33.759 [669/707] Linking target lib/librte_graph.so.24.0 00:02:33.759 [670/707] Linking target lib/librte_latencystats.so.24.0 00:02:33.759 [671/707] Linking target lib/librte_bitratestats.so.24.0 00:02:33.759 [672/707] Linking target lib/librte_pdump.so.24.0 00:02:33.759 [673/707] Linking target lib/librte_dispatcher.so.24.0 00:02:33.759 [674/707] Linking target lib/librte_port.so.24.0 00:02:33.759 [675/707] Generating symbol file 
lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:33.759 [676/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:34.020 [677/707] Linking target lib/librte_node.so.24.0 00:02:34.020 [678/707] Linking target lib/librte_table.so.24.0 00:02:34.020 [679/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:36.561 [680/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:36.561 [681/707] Linking static target lib/librte_pipeline.a 00:02:36.562 [682/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:36.821 [683/707] Linking static target lib/librte_vhost.a 00:02:37.080 [684/707] Linking target app/dpdk-proc-info 00:02:37.080 [685/707] Linking target app/dpdk-test-regex 00:02:37.080 [686/707] Linking target app/dpdk-test-cmdline 00:02:37.080 [687/707] Linking target app/dpdk-test-dma-perf 00:02:37.080 [688/707] Linking target app/dpdk-test-sad 00:02:37.080 [689/707] Linking target app/dpdk-test-gpudev 00:02:37.080 [690/707] Linking target app/dpdk-test-acl 00:02:37.080 [691/707] Linking target app/dpdk-test-compress-perf 00:02:37.080 [692/707] Linking target app/dpdk-test-flow-perf 00:02:37.080 [693/707] Linking target app/dpdk-test-crypto-perf 00:02:37.080 [694/707] Linking target app/dpdk-test-fib 00:02:37.080 [695/707] Linking target app/dpdk-graph 00:02:37.080 [696/707] Linking target app/dpdk-pdump 00:02:37.080 [697/707] Linking target app/dpdk-dumpcap 00:02:37.080 [698/707] Linking target app/dpdk-test-pipeline 00:02:37.080 [699/707] Linking target app/dpdk-test-mldev 00:02:37.080 [700/707] Linking target app/dpdk-test-security-perf 00:02:37.080 [701/707] Linking target app/dpdk-test-bbdev 00:02:37.080 [702/707] Linking target app/dpdk-test-eventdev 00:02:37.340 [703/707] Linking target app/dpdk-testpmd 00:02:38.724 [704/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.724 
[705/707] Linking target lib/librte_vhost.so.24.0 00:02:41.268 [706/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.528 [707/707] Linking target lib/librte_pipeline.so.24.0 00:02:41.528 05:53:01 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:02:41.528 05:53:01 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:41.528 05:53:01 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 install 00:02:41.528 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:41.528 [0/1] Installing files. 00:02:41.793 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.793 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:41.793 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:41.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:41.796 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:41.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:41.797 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:41.797 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.797 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.797 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.798 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.798 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:41.798 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:41.798 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:41.799 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:41.799 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:41.799 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:41.799 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:41.799 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:41.799 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:41.799 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:41.799 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:42.065 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:42.065 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:42.065 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:42.065 Installing lib/librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:42.065 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:42.065 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:42.065 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:42.065 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:42.065 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:42.065 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:42.065 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:42.065 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:42.065 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:42.065 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:42.065 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:42.065 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:42.065 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:42.065 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:42.065 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:42.065 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:42.065 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:42.065 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:42.065 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:42.065 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:02:42.065 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:42.065 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:02:42.065 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:42.065 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:02:42.065 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:42.065 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:02:42.065 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:42.065 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:42.065 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:42.065 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:42.065 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:42.065 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:42.065 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:42.065 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:42.065 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:42.065 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:42.065 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:42.065 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:42.065 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:42.065 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:42.065 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:42.065 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:42.065 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:42.065 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:42.065 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:42.065 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.068 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:42.069 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:42.069 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 
00:02:42.069 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:42.069 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:42.069 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:42.069 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:42.069 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:42.069 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:42.069 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:42.069 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:42.069 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:42.069 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:42.069 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:42.069 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:42.070 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:42.070 Installing symlink pointing to librte_mbuf.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:42.070 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:42.070 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:42.070 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:42.070 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:42.070 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:42.070 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:42.070 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:42.070 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:42.070 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:42.070 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:42.070 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:42.070 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:42.070 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:42.070 Installing symlink pointing to 
librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:42.070 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:42.070 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:42.070 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:42.070 Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:42.070 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:42.070 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:42.070 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:42.070 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:42.070 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:42.070 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:42.070 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:42.070 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:42.070 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 
00:02:42.070 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:42.070 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:42.070 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:42.070 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:42.070 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:42.070 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:42.070 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:42.070 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:42.070 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:42.070 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:42.070 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:42.070 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:42.070 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:42.070 Installing symlink 
pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:42.070 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:42.070 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:42.070 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:42.070 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:42.070 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:42.070 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:42.070 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:42.070 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:42.070 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:42.070 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:42.070 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:42.070 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:42.070 Installing symlink pointing to librte_lpm.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:42.070 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:42.070 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:42.070 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:42.070 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:42.070 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:42.070 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:42.070 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:42.070 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:42.070 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:42.070 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:42.070 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:42.070 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:42.070 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:42.070 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:42.070 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 
00:02:42.070 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:42.070 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:42.070 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:42.070 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:42.070 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:42.070 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:42.070 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:42.070 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:42.070 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:42.070 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:42.070 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:42.070 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:42.070 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:42.070 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:42.070 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:42.070 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:42.070 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:42.071 
Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:42.071 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:42.071 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:42.071 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:42.071 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:42.071 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:42.071 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:42.071 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:42.071 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:42.071 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:42.071 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:42.071 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:42.071 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:42.071 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 
00:02:42.071 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:42.071 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:42.071 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:42.071 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:42.071 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:42.071 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:42.071 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:42.071 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:42.071 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:42.071 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:42.071 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:42.071 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:42.071 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:42.071 Installing symlink 
pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:42.071 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:42.071 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:42.071 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:42.071 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:42.071 05:53:02 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:02:42.071 05:53:02 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:42.071 00:02:42.071 real 0m29.701s 00:02:42.071 user 9m23.181s 00:02:42.071 sys 2m13.709s 00:02:42.071 05:53:02 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:42.071 05:53:02 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:42.071 ************************************ 00:02:42.071 END TEST build_native_dpdk 00:02:42.071 ************************************ 00:02:42.331 05:53:02 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:42.331 05:53:02 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:42.331 05:53:02 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:42.331 05:53:02 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:42.331 05:53:02 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:42.331 05:53:02 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:42.331 05:53:02 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:42.331 05:53:02 -- spdk/autobuild.sh@67 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:42.331 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:42.591 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:42.591 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:42.591 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:42.850 Using 'verbs' RDMA provider 00:02:56.013 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:08.238 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:08.238 Creating mk/config.mk...done. 00:03:08.238 Creating mk/cc.flags.mk...done. 00:03:08.238 Type 'make' to build. 
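The long run of "Installing symlink pointing to librte_*.so.24.0 …" entries above reflects the standard versioned shared-library layout: the real object carries the full ABI version (`.so.24.0`), a soname link (`.so.24`) serves the runtime loader, and an unversioned dev link (`.so`) serves the linker. A minimal sketch of that layout, using a temporary directory and the illustrative name `librte_eal` (the paths here are hypothetical, not the ones from this run):

```shell
# Sketch of the versioned symlink scheme the install step produces.
set -e
LIBDIR=$(mktemp -d)
touch "$LIBDIR/librte_eal.so.24.0"                     # real shared object (full ABI version)
ln -s librte_eal.so.24.0 "$LIBDIR/librte_eal.so.24"    # soname link, used by the runtime loader
ln -s librte_eal.so.24   "$LIBDIR/librte_eal.so"       # dev link, used at link time (-lrte_eal)
ls -l "$LIBDIR"
```

This chain is why the log installs two symlinks per library: programs linked with `-lrte_eal` resolve through the `.so` link at build time, while binaries record the soname `librte_eal.so.24` and resolve through that link at run time.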
00:03:08.238 05:53:27 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:03:08.238 05:53:27 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:08.238 05:53:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:08.238 05:53:27 -- common/autotest_common.sh@10 -- $ set +x 00:03:08.238 ************************************ 00:03:08.238 START TEST make 00:03:08.238 ************************************ 00:03:08.238 05:53:27 make -- common/autotest_common.sh@1129 -- $ make -j96 00:03:10.160 The Meson build system 00:03:10.160 Version: 1.5.0 00:03:10.160 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:10.160 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:10.160 Build type: native build 00:03:10.160 Project name: libvfio-user 00:03:10.160 Project version: 0.0.1 00:03:10.160 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:10.160 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:10.160 Host machine cpu family: x86_64 00:03:10.160 Host machine cpu: x86_64 00:03:10.160 Run-time dependency threads found: YES 00:03:10.160 Library dl found: YES 00:03:10.160 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:10.160 Run-time dependency json-c found: YES 0.17 00:03:10.160 Run-time dependency cmocka found: YES 1.1.7 00:03:10.160 Program pytest-3 found: NO 00:03:10.160 Program flake8 found: NO 00:03:10.160 Program misspell-fixer found: NO 00:03:10.160 Program restructuredtext-lint found: NO 00:03:10.160 Program valgrind found: YES (/usr/bin/valgrind) 00:03:10.160 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:10.160 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:10.160 Compiler for C supports arguments -Wwrite-strings: YES 00:03:10.160 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites 
arg in add_test_setup. 00:03:10.160 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:10.160 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:10.160 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:10.160 Build targets in project: 8 00:03:10.160 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:10.160 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:10.160 00:03:10.160 libvfio-user 0.0.1 00:03:10.160 00:03:10.160 User defined options 00:03:10.160 buildtype : debug 00:03:10.160 default_library: shared 00:03:10.160 libdir : /usr/local/lib 00:03:10.160 00:03:10.160 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:10.420 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:10.679 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:10.679 [2/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:10.679 [3/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:10.679 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:10.679 [5/37] Compiling C object samples/null.p/null.c.o 00:03:10.679 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:10.679 [7/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:10.679 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:10.679 [9/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:10.679 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:10.679 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:10.679 [12/37] Compiling C object 
test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:10.679 [13/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:10.679 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:10.679 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:10.679 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:10.679 [17/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:10.679 [18/37] Compiling C object samples/server.p/server.c.o 00:03:10.679 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:10.679 [20/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:10.679 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:10.679 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:10.679 [23/37] Compiling C object samples/client.p/client.c.o 00:03:10.679 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:10.679 [25/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:10.679 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:10.679 [27/37] Linking target samples/client 00:03:10.679 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:10.679 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:10.679 [30/37] Linking target test/unit_tests 00:03:10.679 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:03:10.939 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:10.939 [33/37] Linking target samples/lspci 00:03:10.939 [34/37] Linking target samples/shadow_ioeventfd_server 00:03:10.939 [35/37] Linking target samples/null 00:03:10.939 [36/37] Linking target samples/gpio-pci-idio-16 00:03:10.939 [37/37] Linking target samples/server 00:03:10.939 INFO: autodetecting backend as ninja 00:03:10.939 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:10.939 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:11.510 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:11.510 ninja: no work to do. 00:03:38.072 CC lib/log/log.o 00:03:38.072 CC lib/log/log_flags.o 00:03:38.072 CC lib/log/log_deprecated.o 00:03:38.072 CC lib/ut_mock/mock.o 00:03:38.072 CC lib/ut/ut.o 00:03:38.072 LIB libspdk_ut_mock.a 00:03:38.072 LIB libspdk_ut.a 00:03:38.072 LIB libspdk_log.a 00:03:38.072 SO libspdk_ut_mock.so.6.0 00:03:38.072 SO libspdk_ut.so.2.0 00:03:38.072 SO libspdk_log.so.7.1 00:03:38.072 SYMLINK libspdk_ut_mock.so 00:03:38.072 SYMLINK libspdk_ut.so 00:03:38.072 SYMLINK libspdk_log.so 00:03:38.330 CC lib/util/base64.o 00:03:38.330 CC lib/util/bit_array.o 00:03:38.330 CC lib/util/cpuset.o 00:03:38.330 CC lib/util/crc16.o 00:03:38.330 CC lib/util/crc32.o 00:03:38.330 CC lib/util/crc32c.o 00:03:38.330 CC lib/util/crc64.o 00:03:38.330 CC lib/util/crc32_ieee.o 00:03:38.330 CXX lib/trace_parser/trace.o 00:03:38.330 CC lib/dma/dma.o 00:03:38.330 CC lib/util/dif.o 00:03:38.330 CC lib/util/fd.o 00:03:38.330 CC lib/util/fd_group.o 00:03:38.330 CC lib/util/file.o 00:03:38.330 CC lib/util/hexlify.o 00:03:38.330 CC lib/ioat/ioat.o 00:03:38.330 CC lib/util/iov.o 00:03:38.330 CC lib/util/math.o 00:03:38.330 CC lib/util/net.o 00:03:38.330 CC lib/util/pipe.o 00:03:38.330 CC lib/util/strerror_tls.o 00:03:38.330 CC lib/util/string.o 00:03:38.330 CC lib/util/uuid.o 00:03:38.330 CC lib/util/xor.o 00:03:38.330 CC lib/util/zipf.o 00:03:38.330 CC lib/util/md5.o 00:03:38.330 CC lib/vfio_user/host/vfio_user.o 00:03:38.330 CC lib/vfio_user/host/vfio_user_pci.o 00:03:38.589 LIB libspdk_dma.a 00:03:38.589 SO libspdk_dma.so.5.0 00:03:38.589 LIB libspdk_ioat.a 00:03:38.589 
SYMLINK libspdk_dma.so 00:03:38.589 SO libspdk_ioat.so.7.0 00:03:38.589 SYMLINK libspdk_ioat.so 00:03:38.589 LIB libspdk_vfio_user.a 00:03:38.589 SO libspdk_vfio_user.so.5.0 00:03:38.847 LIB libspdk_util.a 00:03:38.847 SYMLINK libspdk_vfio_user.so 00:03:38.847 SO libspdk_util.so.10.1 00:03:38.847 SYMLINK libspdk_util.so 00:03:39.107 LIB libspdk_trace_parser.a 00:03:39.107 SO libspdk_trace_parser.so.6.0 00:03:39.107 SYMLINK libspdk_trace_parser.so 00:03:39.367 CC lib/json/json_parse.o 00:03:39.367 CC lib/conf/conf.o 00:03:39.367 CC lib/json/json_util.o 00:03:39.367 CC lib/env_dpdk/env.o 00:03:39.367 CC lib/json/json_write.o 00:03:39.367 CC lib/env_dpdk/memory.o 00:03:39.367 CC lib/env_dpdk/pci.o 00:03:39.367 CC lib/env_dpdk/init.o 00:03:39.367 CC lib/env_dpdk/threads.o 00:03:39.367 CC lib/env_dpdk/pci_ioat.o 00:03:39.367 CC lib/idxd/idxd.o 00:03:39.367 CC lib/idxd/idxd_user.o 00:03:39.367 CC lib/rdma_utils/rdma_utils.o 00:03:39.367 CC lib/env_dpdk/pci_virtio.o 00:03:39.367 CC lib/idxd/idxd_kernel.o 00:03:39.367 CC lib/vmd/vmd.o 00:03:39.367 CC lib/env_dpdk/pci_vmd.o 00:03:39.367 CC lib/vmd/led.o 00:03:39.367 CC lib/env_dpdk/pci_idxd.o 00:03:39.367 CC lib/env_dpdk/pci_event.o 00:03:39.367 CC lib/env_dpdk/sigbus_handler.o 00:03:39.367 CC lib/env_dpdk/pci_dpdk.o 00:03:39.367 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:39.367 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:39.626 LIB libspdk_conf.a 00:03:39.626 SO libspdk_conf.so.6.0 00:03:39.626 LIB libspdk_json.a 00:03:39.626 LIB libspdk_rdma_utils.a 00:03:39.626 SYMLINK libspdk_conf.so 00:03:39.627 SO libspdk_json.so.6.0 00:03:39.627 SO libspdk_rdma_utils.so.1.0 00:03:39.627 SYMLINK libspdk_json.so 00:03:39.627 SYMLINK libspdk_rdma_utils.so 00:03:39.885 LIB libspdk_idxd.a 00:03:39.885 SO libspdk_idxd.so.12.1 00:03:39.885 LIB libspdk_vmd.a 00:03:39.885 SYMLINK libspdk_idxd.so 00:03:39.885 SO libspdk_vmd.so.6.0 00:03:39.885 SYMLINK libspdk_vmd.so 00:03:40.144 CC lib/rdma_provider/common.o 00:03:40.144 CC lib/jsonrpc/jsonrpc_server.o 
00:03:40.145 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:40.145 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:40.145 CC lib/jsonrpc/jsonrpc_client.o 00:03:40.145 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:40.145 LIB libspdk_rdma_provider.a 00:03:40.404 LIB libspdk_jsonrpc.a 00:03:40.404 SO libspdk_rdma_provider.so.7.0 00:03:40.404 SO libspdk_jsonrpc.so.6.0 00:03:40.404 SYMLINK libspdk_rdma_provider.so 00:03:40.404 LIB libspdk_env_dpdk.a 00:03:40.404 SYMLINK libspdk_jsonrpc.so 00:03:40.404 SO libspdk_env_dpdk.so.15.1 00:03:40.663 SYMLINK libspdk_env_dpdk.so 00:03:40.663 CC lib/rpc/rpc.o 00:03:40.922 LIB libspdk_rpc.a 00:03:40.922 SO libspdk_rpc.so.6.0 00:03:40.922 SYMLINK libspdk_rpc.so 00:03:41.490 CC lib/trace/trace.o 00:03:41.490 CC lib/trace/trace_flags.o 00:03:41.490 CC lib/trace/trace_rpc.o 00:03:41.490 CC lib/notify/notify.o 00:03:41.490 CC lib/keyring/keyring.o 00:03:41.490 CC lib/notify/notify_rpc.o 00:03:41.490 CC lib/keyring/keyring_rpc.o 00:03:41.490 LIB libspdk_notify.a 00:03:41.490 SO libspdk_notify.so.6.0 00:03:41.490 LIB libspdk_keyring.a 00:03:41.749 LIB libspdk_trace.a 00:03:41.749 SO libspdk_keyring.so.2.0 00:03:41.749 SO libspdk_trace.so.11.0 00:03:41.749 SYMLINK libspdk_notify.so 00:03:41.749 SYMLINK libspdk_keyring.so 00:03:41.749 SYMLINK libspdk_trace.so 00:03:42.008 CC lib/sock/sock.o 00:03:42.008 CC lib/sock/sock_rpc.o 00:03:42.008 CC lib/thread/thread.o 00:03:42.008 CC lib/thread/iobuf.o 00:03:42.267 LIB libspdk_sock.a 00:03:42.526 SO libspdk_sock.so.10.0 00:03:42.526 SYMLINK libspdk_sock.so 00:03:42.785 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:42.785 CC lib/nvme/nvme_ctrlr.o 00:03:42.785 CC lib/nvme/nvme_fabric.o 00:03:42.785 CC lib/nvme/nvme_ns_cmd.o 00:03:42.785 CC lib/nvme/nvme_ns.o 00:03:42.785 CC lib/nvme/nvme_pcie_common.o 00:03:42.785 CC lib/nvme/nvme_pcie.o 00:03:42.785 CC lib/nvme/nvme_qpair.o 00:03:42.785 CC lib/nvme/nvme.o 00:03:42.785 CC lib/nvme/nvme_quirks.o 00:03:42.785 CC lib/nvme/nvme_transport.o 00:03:42.785 CC 
lib/nvme/nvme_discovery.o 00:03:42.785 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:42.785 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:42.785 CC lib/nvme/nvme_tcp.o 00:03:42.785 CC lib/nvme/nvme_opal.o 00:03:42.785 CC lib/nvme/nvme_io_msg.o 00:03:42.785 CC lib/nvme/nvme_poll_group.o 00:03:42.785 CC lib/nvme/nvme_zns.o 00:03:42.785 CC lib/nvme/nvme_stubs.o 00:03:42.785 CC lib/nvme/nvme_auth.o 00:03:42.785 CC lib/nvme/nvme_cuse.o 00:03:42.785 CC lib/nvme/nvme_vfio_user.o 00:03:42.785 CC lib/nvme/nvme_rdma.o 00:03:43.044 LIB libspdk_thread.a 00:03:43.302 SO libspdk_thread.so.11.0 00:03:43.302 SYMLINK libspdk_thread.so 00:03:43.561 CC lib/virtio/virtio.o 00:03:43.561 CC lib/virtio/virtio_vhost_user.o 00:03:43.561 CC lib/virtio/virtio_vfio_user.o 00:03:43.561 CC lib/virtio/virtio_pci.o 00:03:43.561 CC lib/accel/accel.o 00:03:43.561 CC lib/accel/accel_rpc.o 00:03:43.561 CC lib/accel/accel_sw.o 00:03:43.561 CC lib/fsdev/fsdev.o 00:03:43.561 CC lib/fsdev/fsdev_io.o 00:03:43.561 CC lib/fsdev/fsdev_rpc.o 00:03:43.561 CC lib/init/json_config.o 00:03:43.561 CC lib/init/subsystem.o 00:03:43.561 CC lib/init/subsystem_rpc.o 00:03:43.561 CC lib/blob/blobstore.o 00:03:43.561 CC lib/init/rpc.o 00:03:43.561 CC lib/blob/request.o 00:03:43.561 CC lib/blob/blob_bs_dev.o 00:03:43.561 CC lib/blob/zeroes.o 00:03:43.561 CC lib/vfu_tgt/tgt_endpoint.o 00:03:43.561 CC lib/vfu_tgt/tgt_rpc.o 00:03:43.820 LIB libspdk_init.a 00:03:43.820 SO libspdk_init.so.6.0 00:03:43.820 LIB libspdk_virtio.a 00:03:44.078 LIB libspdk_vfu_tgt.a 00:03:44.078 SO libspdk_virtio.so.7.0 00:03:44.078 SYMLINK libspdk_init.so 00:03:44.078 SO libspdk_vfu_tgt.so.3.0 00:03:44.078 SYMLINK libspdk_virtio.so 00:03:44.078 SYMLINK libspdk_vfu_tgt.so 00:03:44.078 LIB libspdk_fsdev.a 00:03:44.078 SO libspdk_fsdev.so.2.0 00:03:44.337 SYMLINK libspdk_fsdev.so 00:03:44.337 CC lib/event/app.o 00:03:44.337 CC lib/event/reactor.o 00:03:44.337 CC lib/event/log_rpc.o 00:03:44.337 CC lib/event/app_rpc.o 00:03:44.337 CC lib/event/scheduler_static.o 
00:03:44.337 LIB libspdk_accel.a 00:03:44.596 SO libspdk_accel.so.16.0 00:03:44.596 LIB libspdk_nvme.a 00:03:44.596 SYMLINK libspdk_accel.so 00:03:44.596 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:44.596 SO libspdk_nvme.so.15.0 00:03:44.596 LIB libspdk_event.a 00:03:44.596 SO libspdk_event.so.14.0 00:03:44.854 SYMLINK libspdk_event.so 00:03:44.854 SYMLINK libspdk_nvme.so 00:03:44.854 CC lib/bdev/bdev.o 00:03:44.854 CC lib/bdev/bdev_rpc.o 00:03:44.854 CC lib/bdev/bdev_zone.o 00:03:44.854 CC lib/bdev/part.o 00:03:44.854 CC lib/bdev/scsi_nvme.o 00:03:45.115 LIB libspdk_fuse_dispatcher.a 00:03:45.115 SO libspdk_fuse_dispatcher.so.1.0 00:03:45.115 SYMLINK libspdk_fuse_dispatcher.so 00:03:45.685 LIB libspdk_blob.a 00:03:45.944 SO libspdk_blob.so.12.0 00:03:45.944 SYMLINK libspdk_blob.so 00:03:46.202 CC lib/lvol/lvol.o 00:03:46.202 CC lib/blobfs/blobfs.o 00:03:46.202 CC lib/blobfs/tree.o 00:03:46.769 LIB libspdk_bdev.a 00:03:46.769 SO libspdk_bdev.so.17.0 00:03:47.028 LIB libspdk_blobfs.a 00:03:47.028 SYMLINK libspdk_bdev.so 00:03:47.028 SO libspdk_blobfs.so.11.0 00:03:47.028 LIB libspdk_lvol.a 00:03:47.028 SYMLINK libspdk_blobfs.so 00:03:47.028 SO libspdk_lvol.so.11.0 00:03:47.028 SYMLINK libspdk_lvol.so 00:03:47.288 CC lib/ublk/ublk.o 00:03:47.288 CC lib/ublk/ublk_rpc.o 00:03:47.288 CC lib/scsi/dev.o 00:03:47.288 CC lib/scsi/lun.o 00:03:47.288 CC lib/nvmf/ctrlr.o 00:03:47.288 CC lib/scsi/port.o 00:03:47.288 CC lib/nvmf/ctrlr_discovery.o 00:03:47.288 CC lib/scsi/scsi.o 00:03:47.288 CC lib/nvmf/ctrlr_bdev.o 00:03:47.288 CC lib/scsi/scsi_bdev.o 00:03:47.288 CC lib/nvmf/subsystem.o 00:03:47.288 CC lib/scsi/scsi_pr.o 00:03:47.288 CC lib/scsi/scsi_rpc.o 00:03:47.288 CC lib/ftl/ftl_core.o 00:03:47.288 CC lib/nvmf/nvmf.o 00:03:47.288 CC lib/scsi/task.o 00:03:47.288 CC lib/nvmf/nvmf_rpc.o 00:03:47.288 CC lib/ftl/ftl_init.o 00:03:47.288 CC lib/nvmf/transport.o 00:03:47.288 CC lib/ftl/ftl_layout.o 00:03:47.288 CC lib/nvmf/tcp.o 00:03:47.288 CC lib/nvmf/stubs.o 00:03:47.288 CC 
lib/ftl/ftl_debug.o 00:03:47.288 CC lib/nbd/nbd.o 00:03:47.288 CC lib/ftl/ftl_io.o 00:03:47.288 CC lib/ftl/ftl_sb.o 00:03:47.288 CC lib/nbd/nbd_rpc.o 00:03:47.288 CC lib/nvmf/mdns_server.o 00:03:47.288 CC lib/ftl/ftl_l2p.o 00:03:47.288 CC lib/nvmf/rdma.o 00:03:47.288 CC lib/nvmf/vfio_user.o 00:03:47.288 CC lib/nvmf/auth.o 00:03:47.288 CC lib/ftl/ftl_l2p_flat.o 00:03:47.288 CC lib/ftl/ftl_nv_cache.o 00:03:47.288 CC lib/ftl/ftl_band.o 00:03:47.288 CC lib/ftl/ftl_band_ops.o 00:03:47.288 CC lib/ftl/ftl_writer.o 00:03:47.288 CC lib/ftl/ftl_rq.o 00:03:47.288 CC lib/ftl/ftl_reloc.o 00:03:47.288 CC lib/ftl/ftl_l2p_cache.o 00:03:47.288 CC lib/ftl/ftl_p2l.o 00:03:47.288 CC lib/ftl/ftl_p2l_log.o 00:03:47.288 CC lib/ftl/mngt/ftl_mngt.o 00:03:47.288 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:47.288 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:47.288 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:47.288 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:47.288 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:47.288 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:47.288 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:47.288 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:47.288 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:47.288 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:47.288 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:47.288 CC lib/ftl/utils/ftl_md.o 00:03:47.288 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:47.288 CC lib/ftl/utils/ftl_conf.o 00:03:47.288 CC lib/ftl/utils/ftl_mempool.o 00:03:47.288 CC lib/ftl/utils/ftl_bitmap.o 00:03:47.288 CC lib/ftl/utils/ftl_property.o 00:03:47.288 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:47.288 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:47.288 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:47.288 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:47.288 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:47.288 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:47.288 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:47.288 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:47.288 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:47.288 CC lib/ftl/nvc/ftl_nvc_dev.o 
00:03:47.288 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:47.288 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:47.288 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:47.288 CC lib/ftl/base/ftl_base_dev.o 00:03:47.288 CC lib/ftl/base/ftl_base_bdev.o 00:03:47.288 CC lib/ftl/ftl_trace.o 00:03:47.859 LIB libspdk_nbd.a 00:03:47.859 SO libspdk_nbd.so.7.0 00:03:47.859 SYMLINK libspdk_nbd.so 00:03:48.117 LIB libspdk_scsi.a 00:03:48.117 SO libspdk_scsi.so.9.0 00:03:48.117 LIB libspdk_ublk.a 00:03:48.117 SO libspdk_ublk.so.3.0 00:03:48.117 SYMLINK libspdk_scsi.so 00:03:48.117 SYMLINK libspdk_ublk.so 00:03:48.376 LIB libspdk_ftl.a 00:03:48.376 SO libspdk_ftl.so.9.0 00:03:48.376 CC lib/iscsi/conn.o 00:03:48.376 CC lib/vhost/vhost.o 00:03:48.376 CC lib/iscsi/init_grp.o 00:03:48.376 CC lib/iscsi/iscsi.o 00:03:48.376 CC lib/vhost/vhost_rpc.o 00:03:48.376 CC lib/iscsi/param.o 00:03:48.376 CC lib/vhost/vhost_scsi.o 00:03:48.376 CC lib/vhost/vhost_blk.o 00:03:48.376 CC lib/iscsi/portal_grp.o 00:03:48.376 CC lib/vhost/rte_vhost_user.o 00:03:48.376 CC lib/iscsi/tgt_node.o 00:03:48.376 CC lib/iscsi/iscsi_subsystem.o 00:03:48.376 CC lib/iscsi/iscsi_rpc.o 00:03:48.376 CC lib/iscsi/task.o 00:03:48.635 SYMLINK libspdk_ftl.so 00:03:49.203 LIB libspdk_nvmf.a 00:03:49.203 SO libspdk_nvmf.so.20.0 00:03:49.203 LIB libspdk_vhost.a 00:03:49.203 SO libspdk_vhost.so.8.0 00:03:49.463 SYMLINK libspdk_nvmf.so 00:03:49.463 SYMLINK libspdk_vhost.so 00:03:49.463 LIB libspdk_iscsi.a 00:03:49.463 SO libspdk_iscsi.so.8.0 00:03:49.721 SYMLINK libspdk_iscsi.so 00:03:50.288 CC module/vfu_device/vfu_virtio.o 00:03:50.288 CC module/vfu_device/vfu_virtio_blk.o 00:03:50.288 CC module/vfu_device/vfu_virtio_scsi.o 00:03:50.288 CC module/env_dpdk/env_dpdk_rpc.o 00:03:50.289 CC module/vfu_device/vfu_virtio_fs.o 00:03:50.289 CC module/vfu_device/vfu_virtio_rpc.o 00:03:50.289 CC module/accel/ioat/accel_ioat.o 00:03:50.289 CC module/accel/ioat/accel_ioat_rpc.o 00:03:50.289 CC module/accel/iaa/accel_iaa.o 00:03:50.289 CC 
module/accel/iaa/accel_iaa_rpc.o 00:03:50.289 CC module/accel/dsa/accel_dsa.o 00:03:50.289 CC module/accel/dsa/accel_dsa_rpc.o 00:03:50.289 LIB libspdk_env_dpdk_rpc.a 00:03:50.289 CC module/keyring/file/keyring_rpc.o 00:03:50.289 CC module/keyring/file/keyring.o 00:03:50.289 CC module/keyring/linux/keyring.o 00:03:50.289 CC module/keyring/linux/keyring_rpc.o 00:03:50.289 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:50.289 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:50.289 CC module/blob/bdev/blob_bdev.o 00:03:50.289 CC module/sock/posix/posix.o 00:03:50.289 CC module/scheduler/gscheduler/gscheduler.o 00:03:50.289 CC module/fsdev/aio/fsdev_aio.o 00:03:50.289 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:50.289 CC module/fsdev/aio/linux_aio_mgr.o 00:03:50.289 CC module/accel/error/accel_error.o 00:03:50.289 CC module/accel/error/accel_error_rpc.o 00:03:50.289 SO libspdk_env_dpdk_rpc.so.6.0 00:03:50.546 SYMLINK libspdk_env_dpdk_rpc.so 00:03:50.546 LIB libspdk_keyring_file.a 00:03:50.546 LIB libspdk_keyring_linux.a 00:03:50.546 LIB libspdk_scheduler_dpdk_governor.a 00:03:50.546 LIB libspdk_scheduler_gscheduler.a 00:03:50.546 LIB libspdk_accel_ioat.a 00:03:50.546 SO libspdk_keyring_linux.so.1.0 00:03:50.546 SO libspdk_keyring_file.so.2.0 00:03:50.546 SO libspdk_scheduler_gscheduler.so.4.0 00:03:50.546 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:50.546 LIB libspdk_accel_error.a 00:03:50.546 SO libspdk_accel_ioat.so.6.0 00:03:50.546 LIB libspdk_scheduler_dynamic.a 00:03:50.546 LIB libspdk_accel_iaa.a 00:03:50.546 SO libspdk_scheduler_dynamic.so.4.0 00:03:50.546 SYMLINK libspdk_scheduler_gscheduler.so 00:03:50.546 SO libspdk_accel_error.so.2.0 00:03:50.546 SYMLINK libspdk_keyring_file.so 00:03:50.546 SYMLINK libspdk_keyring_linux.so 00:03:50.546 SO libspdk_accel_iaa.so.3.0 00:03:50.546 LIB libspdk_accel_dsa.a 00:03:50.546 LIB libspdk_blob_bdev.a 00:03:50.546 SYMLINK libspdk_accel_ioat.so 00:03:50.546 SYMLINK libspdk_scheduler_dpdk_governor.so 
00:03:50.805 SO libspdk_blob_bdev.so.12.0 00:03:50.805 SYMLINK libspdk_accel_error.so 00:03:50.805 SO libspdk_accel_dsa.so.5.0 00:03:50.805 SYMLINK libspdk_scheduler_dynamic.so 00:03:50.805 SYMLINK libspdk_accel_iaa.so 00:03:50.805 LIB libspdk_vfu_device.a 00:03:50.805 SYMLINK libspdk_blob_bdev.so 00:03:50.805 SO libspdk_vfu_device.so.3.0 00:03:50.805 SYMLINK libspdk_accel_dsa.so 00:03:50.805 SYMLINK libspdk_vfu_device.so 00:03:50.805 LIB libspdk_fsdev_aio.a 00:03:51.065 SO libspdk_fsdev_aio.so.1.0 00:03:51.065 LIB libspdk_sock_posix.a 00:03:51.065 SYMLINK libspdk_fsdev_aio.so 00:03:51.065 SO libspdk_sock_posix.so.6.0 00:03:51.065 SYMLINK libspdk_sock_posix.so 00:03:51.324 CC module/bdev/split/vbdev_split_rpc.o 00:03:51.324 CC module/bdev/split/vbdev_split.o 00:03:51.324 CC module/bdev/lvol/vbdev_lvol.o 00:03:51.324 CC module/blobfs/bdev/blobfs_bdev.o 00:03:51.324 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:51.324 CC module/bdev/null/bdev_null.o 00:03:51.324 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:51.324 CC module/bdev/error/vbdev_error_rpc.o 00:03:51.324 CC module/bdev/error/vbdev_error.o 00:03:51.324 CC module/bdev/null/bdev_null_rpc.o 00:03:51.324 CC module/bdev/ftl/bdev_ftl.o 00:03:51.324 CC module/bdev/gpt/gpt.o 00:03:51.324 CC module/bdev/delay/vbdev_delay.o 00:03:51.324 CC module/bdev/gpt/vbdev_gpt.o 00:03:51.324 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:51.324 CC module/bdev/aio/bdev_aio.o 00:03:51.324 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:51.324 CC module/bdev/raid/bdev_raid_rpc.o 00:03:51.324 CC module/bdev/raid/bdev_raid.o 00:03:51.324 CC module/bdev/aio/bdev_aio_rpc.o 00:03:51.324 CC module/bdev/malloc/bdev_malloc.o 00:03:51.324 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:51.324 CC module/bdev/raid/bdev_raid_sb.o 00:03:51.324 CC module/bdev/raid/raid0.o 00:03:51.324 CC module/bdev/raid/raid1.o 00:03:51.324 CC module/bdev/nvme/bdev_nvme.o 00:03:51.324 CC module/bdev/raid/concat.o 00:03:51.324 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:51.324 
CC module/bdev/iscsi/bdev_iscsi.o 00:03:51.324 CC module/bdev/nvme/nvme_rpc.o 00:03:51.325 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:51.325 CC module/bdev/nvme/bdev_mdns_client.o 00:03:51.325 CC module/bdev/nvme/vbdev_opal.o 00:03:51.325 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:51.325 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:51.325 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:51.325 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:51.325 CC module/bdev/passthru/vbdev_passthru.o 00:03:51.325 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:51.325 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:51.325 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:51.325 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:51.582 LIB libspdk_blobfs_bdev.a 00:03:51.582 SO libspdk_blobfs_bdev.so.6.0 00:03:51.582 LIB libspdk_bdev_split.a 00:03:51.582 LIB libspdk_bdev_error.a 00:03:51.582 SO libspdk_bdev_split.so.6.0 00:03:51.582 LIB libspdk_bdev_null.a 00:03:51.582 SO libspdk_bdev_error.so.6.0 00:03:51.582 SYMLINK libspdk_blobfs_bdev.so 00:03:51.582 LIB libspdk_bdev_passthru.a 00:03:51.582 LIB libspdk_bdev_ftl.a 00:03:51.582 LIB libspdk_bdev_gpt.a 00:03:51.582 SYMLINK libspdk_bdev_split.so 00:03:51.582 SO libspdk_bdev_null.so.6.0 00:03:51.582 SO libspdk_bdev_passthru.so.6.0 00:03:51.582 SYMLINK libspdk_bdev_error.so 00:03:51.582 SO libspdk_bdev_gpt.so.6.0 00:03:51.582 SO libspdk_bdev_ftl.so.6.0 00:03:51.582 LIB libspdk_bdev_aio.a 00:03:51.841 LIB libspdk_bdev_zone_block.a 00:03:51.841 LIB libspdk_bdev_iscsi.a 00:03:51.841 LIB libspdk_bdev_delay.a 00:03:51.841 LIB libspdk_bdev_malloc.a 00:03:51.841 SYMLINK libspdk_bdev_null.so 00:03:51.841 SO libspdk_bdev_aio.so.6.0 00:03:51.841 SO libspdk_bdev_zone_block.so.6.0 00:03:51.841 SYMLINK libspdk_bdev_passthru.so 00:03:51.841 SYMLINK libspdk_bdev_gpt.so 00:03:51.841 SO libspdk_bdev_delay.so.6.0 00:03:51.841 SO libspdk_bdev_iscsi.so.6.0 00:03:51.841 SYMLINK libspdk_bdev_ftl.so 00:03:51.841 SO libspdk_bdev_malloc.so.6.0 
00:03:51.841 LIB libspdk_bdev_lvol.a 00:03:51.842 SYMLINK libspdk_bdev_aio.so 00:03:51.842 SYMLINK libspdk_bdev_zone_block.so 00:03:51.842 SYMLINK libspdk_bdev_delay.so 00:03:51.842 SYMLINK libspdk_bdev_iscsi.so 00:03:51.842 SYMLINK libspdk_bdev_malloc.so 00:03:51.842 SO libspdk_bdev_lvol.so.6.0 00:03:51.842 LIB libspdk_bdev_virtio.a 00:03:51.842 SO libspdk_bdev_virtio.so.6.0 00:03:51.842 SYMLINK libspdk_bdev_lvol.so 00:03:52.101 SYMLINK libspdk_bdev_virtio.so 00:03:52.101 LIB libspdk_bdev_raid.a 00:03:52.101 SO libspdk_bdev_raid.so.6.0 00:03:52.360 SYMLINK libspdk_bdev_raid.so 00:03:53.297 LIB libspdk_bdev_nvme.a 00:03:53.297 SO libspdk_bdev_nvme.so.7.1 00:03:53.297 SYMLINK libspdk_bdev_nvme.so 00:03:54.234 CC module/event/subsystems/iobuf/iobuf.o 00:03:54.234 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:54.234 CC module/event/subsystems/vmd/vmd.o 00:03:54.234 CC module/event/subsystems/sock/sock.o 00:03:54.234 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:54.234 CC module/event/subsystems/fsdev/fsdev.o 00:03:54.234 CC module/event/subsystems/keyring/keyring.o 00:03:54.234 CC module/event/subsystems/scheduler/scheduler.o 00:03:54.234 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:54.234 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:54.234 LIB libspdk_event_fsdev.a 00:03:54.234 LIB libspdk_event_sock.a 00:03:54.234 LIB libspdk_event_iobuf.a 00:03:54.234 LIB libspdk_event_vfu_tgt.a 00:03:54.234 LIB libspdk_event_keyring.a 00:03:54.234 LIB libspdk_event_scheduler.a 00:03:54.234 LIB libspdk_event_vmd.a 00:03:54.234 LIB libspdk_event_vhost_blk.a 00:03:54.234 SO libspdk_event_sock.so.5.0 00:03:54.234 SO libspdk_event_vfu_tgt.so.3.0 00:03:54.234 SO libspdk_event_fsdev.so.1.0 00:03:54.234 SO libspdk_event_keyring.so.1.0 00:03:54.234 SO libspdk_event_iobuf.so.3.0 00:03:54.234 SO libspdk_event_scheduler.so.4.0 00:03:54.234 SO libspdk_event_vmd.so.6.0 00:03:54.234 SO libspdk_event_vhost_blk.so.3.0 00:03:54.234 SYMLINK libspdk_event_sock.so 00:03:54.234 
SYMLINK libspdk_event_vfu_tgt.so 00:03:54.234 SYMLINK libspdk_event_fsdev.so 00:03:54.234 SYMLINK libspdk_event_keyring.so 00:03:54.494 SYMLINK libspdk_event_vmd.so 00:03:54.494 SYMLINK libspdk_event_scheduler.so 00:03:54.494 SYMLINK libspdk_event_iobuf.so 00:03:54.494 SYMLINK libspdk_event_vhost_blk.so 00:03:54.753 CC module/event/subsystems/accel/accel.o 00:03:54.753 LIB libspdk_event_accel.a 00:03:55.013 SO libspdk_event_accel.so.6.0 00:03:55.013 SYMLINK libspdk_event_accel.so 00:03:55.272 CC module/event/subsystems/bdev/bdev.o 00:03:55.531 LIB libspdk_event_bdev.a 00:03:55.531 SO libspdk_event_bdev.so.6.0 00:03:55.531 SYMLINK libspdk_event_bdev.so 00:03:56.099 CC module/event/subsystems/scsi/scsi.o 00:03:56.099 CC module/event/subsystems/ublk/ublk.o 00:03:56.099 CC module/event/subsystems/nbd/nbd.o 00:03:56.099 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:56.099 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:56.099 LIB libspdk_event_scsi.a 00:03:56.099 LIB libspdk_event_nbd.a 00:03:56.099 LIB libspdk_event_ublk.a 00:03:56.099 SO libspdk_event_scsi.so.6.0 00:03:56.099 SO libspdk_event_nbd.so.6.0 00:03:56.099 SO libspdk_event_ublk.so.3.0 00:03:56.099 LIB libspdk_event_nvmf.a 00:03:56.099 SYMLINK libspdk_event_scsi.so 00:03:56.099 SYMLINK libspdk_event_nbd.so 00:03:56.099 SYMLINK libspdk_event_ublk.so 00:03:56.099 SO libspdk_event_nvmf.so.6.0 00:03:56.358 SYMLINK libspdk_event_nvmf.so 00:03:56.617 CC module/event/subsystems/iscsi/iscsi.o 00:03:56.617 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:56.617 LIB libspdk_event_vhost_scsi.a 00:03:56.617 LIB libspdk_event_iscsi.a 00:03:56.617 SO libspdk_event_vhost_scsi.so.3.0 00:03:56.617 SO libspdk_event_iscsi.so.6.0 00:03:56.876 SYMLINK libspdk_event_vhost_scsi.so 00:03:56.876 SYMLINK libspdk_event_iscsi.so 00:03:56.876 SO libspdk.so.6.0 00:03:56.876 SYMLINK libspdk.so 00:03:57.454 CXX app/trace/trace.o 00:03:57.454 CC app/spdk_nvme_identify/identify.o 00:03:57.454 CC app/trace_record/trace_record.o 
00:03:57.454 CC app/spdk_top/spdk_top.o 00:03:57.454 CC test/rpc_client/rpc_client_test.o 00:03:57.454 CC app/spdk_nvme_discover/discovery_aer.o 00:03:57.454 CC app/spdk_nvme_perf/perf.o 00:03:57.454 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:57.454 TEST_HEADER include/spdk/accel.h 00:03:57.454 TEST_HEADER include/spdk/accel_module.h 00:03:57.454 TEST_HEADER include/spdk/assert.h 00:03:57.454 TEST_HEADER include/spdk/barrier.h 00:03:57.454 CC app/spdk_lspci/spdk_lspci.o 00:03:57.454 TEST_HEADER include/spdk/base64.h 00:03:57.454 TEST_HEADER include/spdk/bdev.h 00:03:57.454 TEST_HEADER include/spdk/bdev_module.h 00:03:57.454 TEST_HEADER include/spdk/bdev_zone.h 00:03:57.454 TEST_HEADER include/spdk/bit_array.h 00:03:57.454 TEST_HEADER include/spdk/blob_bdev.h 00:03:57.454 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:57.454 TEST_HEADER include/spdk/bit_pool.h 00:03:57.454 TEST_HEADER include/spdk/blob.h 00:03:57.454 TEST_HEADER include/spdk/conf.h 00:03:57.454 TEST_HEADER include/spdk/blobfs.h 00:03:57.454 TEST_HEADER include/spdk/config.h 00:03:57.454 TEST_HEADER include/spdk/cpuset.h 00:03:57.454 TEST_HEADER include/spdk/crc32.h 00:03:57.454 TEST_HEADER include/spdk/crc16.h 00:03:57.454 TEST_HEADER include/spdk/crc64.h 00:03:57.454 TEST_HEADER include/spdk/dif.h 00:03:57.454 TEST_HEADER include/spdk/endian.h 00:03:57.454 TEST_HEADER include/spdk/dma.h 00:03:57.454 TEST_HEADER include/spdk/env_dpdk.h 00:03:57.454 TEST_HEADER include/spdk/env.h 00:03:57.454 TEST_HEADER include/spdk/event.h 00:03:57.454 TEST_HEADER include/spdk/fsdev.h 00:03:57.454 TEST_HEADER include/spdk/fd.h 00:03:57.454 TEST_HEADER include/spdk/file.h 00:03:57.454 TEST_HEADER include/spdk/fd_group.h 00:03:57.454 TEST_HEADER include/spdk/gpt_spec.h 00:03:57.454 TEST_HEADER include/spdk/ftl.h 00:03:57.454 TEST_HEADER include/spdk/fsdev_module.h 00:03:57.454 TEST_HEADER include/spdk/hexlify.h 00:03:57.454 TEST_HEADER include/spdk/histogram_data.h 00:03:57.454 TEST_HEADER include/spdk/idxd.h 
00:03:57.454 TEST_HEADER include/spdk/idxd_spec.h 00:03:57.454 TEST_HEADER include/spdk/init.h 00:03:57.454 TEST_HEADER include/spdk/ioat_spec.h 00:03:57.454 CC app/spdk_dd/spdk_dd.o 00:03:57.454 TEST_HEADER include/spdk/iscsi_spec.h 00:03:57.454 TEST_HEADER include/spdk/jsonrpc.h 00:03:57.454 CC app/iscsi_tgt/iscsi_tgt.o 00:03:57.454 TEST_HEADER include/spdk/ioat.h 00:03:57.454 TEST_HEADER include/spdk/keyring.h 00:03:57.454 TEST_HEADER include/spdk/json.h 00:03:57.454 TEST_HEADER include/spdk/keyring_module.h 00:03:57.454 TEST_HEADER include/spdk/likely.h 00:03:57.454 TEST_HEADER include/spdk/log.h 00:03:57.454 TEST_HEADER include/spdk/mmio.h 00:03:57.454 TEST_HEADER include/spdk/md5.h 00:03:57.454 TEST_HEADER include/spdk/lvol.h 00:03:57.454 TEST_HEADER include/spdk/memory.h 00:03:57.454 TEST_HEADER include/spdk/net.h 00:03:57.454 TEST_HEADER include/spdk/nbd.h 00:03:57.454 CC app/nvmf_tgt/nvmf_main.o 00:03:57.454 TEST_HEADER include/spdk/nvme.h 00:03:57.455 TEST_HEADER include/spdk/notify.h 00:03:57.455 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:57.455 TEST_HEADER include/spdk/nvme_intel.h 00:03:57.455 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:57.455 TEST_HEADER include/spdk/nvme_spec.h 00:03:57.455 TEST_HEADER include/spdk/nvme_zns.h 00:03:57.455 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:57.455 TEST_HEADER include/spdk/nvmf.h 00:03:57.455 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:57.455 TEST_HEADER include/spdk/nvmf_spec.h 00:03:57.455 TEST_HEADER include/spdk/nvmf_transport.h 00:03:57.455 TEST_HEADER include/spdk/pci_ids.h 00:03:57.455 TEST_HEADER include/spdk/opal.h 00:03:57.455 TEST_HEADER include/spdk/opal_spec.h 00:03:57.455 TEST_HEADER include/spdk/reduce.h 00:03:57.455 TEST_HEADER include/spdk/pipe.h 00:03:57.455 TEST_HEADER include/spdk/queue.h 00:03:57.455 TEST_HEADER include/spdk/scheduler.h 00:03:57.455 TEST_HEADER include/spdk/rpc.h 00:03:57.455 TEST_HEADER include/spdk/scsi.h 00:03:57.455 TEST_HEADER include/spdk/scsi_spec.h 
00:03:57.455 TEST_HEADER include/spdk/stdinc.h 00:03:57.455 TEST_HEADER include/spdk/sock.h 00:03:57.455 TEST_HEADER include/spdk/thread.h 00:03:57.455 TEST_HEADER include/spdk/string.h 00:03:57.455 TEST_HEADER include/spdk/trace.h 00:03:57.455 TEST_HEADER include/spdk/trace_parser.h 00:03:57.455 TEST_HEADER include/spdk/tree.h 00:03:57.455 TEST_HEADER include/spdk/uuid.h 00:03:57.455 TEST_HEADER include/spdk/ublk.h 00:03:57.455 TEST_HEADER include/spdk/util.h 00:03:57.455 TEST_HEADER include/spdk/version.h 00:03:57.455 TEST_HEADER include/spdk/vhost.h 00:03:57.455 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:57.455 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:57.455 TEST_HEADER include/spdk/vmd.h 00:03:57.455 TEST_HEADER include/spdk/zipf.h 00:03:57.455 TEST_HEADER include/spdk/xor.h 00:03:57.455 CXX test/cpp_headers/accel.o 00:03:57.455 CXX test/cpp_headers/accel_module.o 00:03:57.455 CXX test/cpp_headers/assert.o 00:03:57.455 CXX test/cpp_headers/barrier.o 00:03:57.455 CC app/spdk_tgt/spdk_tgt.o 00:03:57.455 CXX test/cpp_headers/base64.o 00:03:57.455 CXX test/cpp_headers/bdev.o 00:03:57.455 CXX test/cpp_headers/bdev_zone.o 00:03:57.455 CXX test/cpp_headers/bit_array.o 00:03:57.455 CXX test/cpp_headers/bit_pool.o 00:03:57.455 CXX test/cpp_headers/bdev_module.o 00:03:57.455 CXX test/cpp_headers/blob_bdev.o 00:03:57.455 CXX test/cpp_headers/blobfs.o 00:03:57.455 CXX test/cpp_headers/conf.o 00:03:57.455 CXX test/cpp_headers/blobfs_bdev.o 00:03:57.455 CXX test/cpp_headers/blob.o 00:03:57.455 CXX test/cpp_headers/config.o 00:03:57.455 CXX test/cpp_headers/crc64.o 00:03:57.455 CXX test/cpp_headers/crc32.o 00:03:57.455 CXX test/cpp_headers/crc16.o 00:03:57.455 CXX test/cpp_headers/dif.o 00:03:57.455 CXX test/cpp_headers/cpuset.o 00:03:57.455 CXX test/cpp_headers/dma.o 00:03:57.455 CXX test/cpp_headers/env_dpdk.o 00:03:57.455 CXX test/cpp_headers/env.o 00:03:57.455 CXX test/cpp_headers/fd_group.o 00:03:57.455 CXX test/cpp_headers/event.o 00:03:57.455 CXX 
test/cpp_headers/fd.o 00:03:57.455 CXX test/cpp_headers/endian.o 00:03:57.455 CXX test/cpp_headers/fsdev.o 00:03:57.455 CXX test/cpp_headers/fsdev_module.o 00:03:57.455 CXX test/cpp_headers/gpt_spec.o 00:03:57.455 CXX test/cpp_headers/ftl.o 00:03:57.455 CXX test/cpp_headers/file.o 00:03:57.455 CXX test/cpp_headers/hexlify.o 00:03:57.455 CXX test/cpp_headers/histogram_data.o 00:03:57.455 CXX test/cpp_headers/idxd.o 00:03:57.455 CXX test/cpp_headers/init.o 00:03:57.455 CXX test/cpp_headers/idxd_spec.o 00:03:57.455 CXX test/cpp_headers/ioat_spec.o 00:03:57.455 CXX test/cpp_headers/ioat.o 00:03:57.455 CXX test/cpp_headers/iscsi_spec.o 00:03:57.455 CXX test/cpp_headers/jsonrpc.o 00:03:57.455 CXX test/cpp_headers/keyring.o 00:03:57.455 CXX test/cpp_headers/likely.o 00:03:57.455 CXX test/cpp_headers/json.o 00:03:57.455 CXX test/cpp_headers/keyring_module.o 00:03:57.455 CXX test/cpp_headers/log.o 00:03:57.455 CXX test/cpp_headers/lvol.o 00:03:57.455 CXX test/cpp_headers/md5.o 00:03:57.455 CXX test/cpp_headers/memory.o 00:03:57.455 CXX test/cpp_headers/net.o 00:03:57.455 CXX test/cpp_headers/mmio.o 00:03:57.455 CXX test/cpp_headers/nbd.o 00:03:57.455 CXX test/cpp_headers/notify.o 00:03:57.455 CXX test/cpp_headers/nvme_intel.o 00:03:57.455 CXX test/cpp_headers/nvme_ocssd.o 00:03:57.455 CXX test/cpp_headers/nvme_spec.o 00:03:57.455 CXX test/cpp_headers/nvme.o 00:03:57.455 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:57.455 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:57.455 CXX test/cpp_headers/nvme_zns.o 00:03:57.455 CXX test/cpp_headers/nvmf_cmd.o 00:03:57.455 CXX test/cpp_headers/nvmf_spec.o 00:03:57.455 CXX test/cpp_headers/nvmf_transport.o 00:03:57.455 CXX test/cpp_headers/opal.o 00:03:57.455 CXX test/cpp_headers/nvmf.o 00:03:57.455 CC examples/ioat/perf/perf.o 00:03:57.455 CXX test/cpp_headers/opal_spec.o 00:03:57.455 CC examples/util/zipf/zipf.o 00:03:57.730 CXX test/cpp_headers/pci_ids.o 00:03:57.730 CC app/fio/nvme/fio_plugin.o 00:03:57.730 CC 
examples/ioat/verify/verify.o 00:03:57.730 CC test/app/histogram_perf/histogram_perf.o 00:03:57.730 CC test/thread/poller_perf/poller_perf.o 00:03:57.730 CC test/app/jsoncat/jsoncat.o 00:03:57.730 CC test/app/stub/stub.o 00:03:57.730 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:57.730 CC test/dma/test_dma/test_dma.o 00:03:57.730 CC test/env/pci/pci_ut.o 00:03:57.730 CC test/env/memory/memory_ut.o 00:03:57.730 CC test/env/vtophys/vtophys.o 00:03:57.730 CC test/app/bdev_svc/bdev_svc.o 00:03:57.730 LINK spdk_lspci 00:03:57.730 CC app/fio/bdev/fio_plugin.o 00:03:57.730 LINK rpc_client_test 00:03:57.997 LINK nvmf_tgt 00:03:57.997 LINK iscsi_tgt 00:03:57.997 LINK interrupt_tgt 00:03:57.997 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:57.997 CC test/env/mem_callbacks/mem_callbacks.o 00:03:57.997 LINK spdk_nvme_discover 00:03:58.257 LINK histogram_perf 00:03:58.257 LINK jsoncat 00:03:58.257 LINK poller_perf 00:03:58.257 CXX test/cpp_headers/pipe.o 00:03:58.257 CXX test/cpp_headers/queue.o 00:03:58.257 CXX test/cpp_headers/reduce.o 00:03:58.257 CXX test/cpp_headers/rpc.o 00:03:58.257 CXX test/cpp_headers/scheduler.o 00:03:58.257 CXX test/cpp_headers/scsi.o 00:03:58.257 LINK env_dpdk_post_init 00:03:58.257 CXX test/cpp_headers/scsi_spec.o 00:03:58.257 CXX test/cpp_headers/sock.o 00:03:58.257 CXX test/cpp_headers/stdinc.o 00:03:58.257 CXX test/cpp_headers/string.o 00:03:58.257 CXX test/cpp_headers/thread.o 00:03:58.257 CXX test/cpp_headers/trace_parser.o 00:03:58.257 CXX test/cpp_headers/trace.o 00:03:58.257 CXX test/cpp_headers/tree.o 00:03:58.257 CXX test/cpp_headers/util.o 00:03:58.257 CXX test/cpp_headers/uuid.o 00:03:58.257 LINK spdk_trace_record 00:03:58.257 CXX test/cpp_headers/ublk.o 00:03:58.257 CXX test/cpp_headers/version.o 00:03:58.257 CXX test/cpp_headers/vfio_user_pci.o 00:03:58.257 CXX test/cpp_headers/vfio_user_spec.o 00:03:58.257 CXX test/cpp_headers/vhost.o 00:03:58.257 CXX test/cpp_headers/vmd.o 00:03:58.257 CXX test/cpp_headers/xor.o 
00:03:58.257 CXX test/cpp_headers/zipf.o 00:03:58.257 LINK verify 00:03:58.257 LINK spdk_dd 00:03:58.257 LINK zipf 00:03:58.257 LINK spdk_tgt 00:03:58.257 LINK vtophys 00:03:58.257 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:58.257 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:58.257 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:58.257 LINK bdev_svc 00:03:58.257 LINK stub 00:03:58.515 LINK ioat_perf 00:03:58.515 LINK pci_ut 00:03:58.515 LINK spdk_trace 00:03:58.773 LINK spdk_nvme_perf 00:03:58.773 LINK test_dma 00:03:58.773 LINK spdk_bdev 00:03:58.773 LINK spdk_nvme 00:03:58.773 LINK nvme_fuzz 00:03:58.773 CC test/event/event_perf/event_perf.o 00:03:58.773 CC test/event/reactor_perf/reactor_perf.o 00:03:58.773 CC test/event/reactor/reactor.o 00:03:58.773 CC test/event/app_repeat/app_repeat.o 00:03:58.773 LINK vhost_fuzz 00:03:58.773 CC examples/idxd/perf/perf.o 00:03:58.773 CC test/event/scheduler/scheduler.o 00:03:58.773 LINK mem_callbacks 00:03:58.773 CC examples/vmd/led/led.o 00:03:58.773 LINK spdk_nvme_identify 00:03:58.773 CC examples/vmd/lsvmd/lsvmd.o 00:03:58.773 CC examples/sock/hello_world/hello_sock.o 00:03:59.032 LINK spdk_top 00:03:59.032 CC examples/thread/thread/thread_ex.o 00:03:59.032 LINK event_perf 00:03:59.032 LINK reactor_perf 00:03:59.032 LINK reactor 00:03:59.032 CC app/vhost/vhost.o 00:03:59.032 LINK app_repeat 00:03:59.032 LINK lsvmd 00:03:59.032 LINK led 00:03:59.032 LINK scheduler 00:03:59.032 LINK hello_sock 00:03:59.290 LINK idxd_perf 00:03:59.290 LINK vhost 00:03:59.290 LINK thread 00:03:59.290 CC test/nvme/sgl/sgl.o 00:03:59.290 CC test/nvme/e2edp/nvme_dp.o 00:03:59.290 CC test/nvme/reserve/reserve.o 00:03:59.290 LINK memory_ut 00:03:59.290 CC test/nvme/overhead/overhead.o 00:03:59.290 CC test/nvme/boot_partition/boot_partition.o 00:03:59.290 CC test/nvme/simple_copy/simple_copy.o 00:03:59.290 CC test/nvme/connect_stress/connect_stress.o 00:03:59.290 CC test/nvme/err_injection/err_injection.o 00:03:59.290 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:03:59.290 CC test/nvme/startup/startup.o 00:03:59.290 CC test/nvme/fdp/fdp.o 00:03:59.290 CC test/nvme/reset/reset.o 00:03:59.290 CC test/nvme/aer/aer.o 00:03:59.290 CC test/nvme/cuse/cuse.o 00:03:59.290 CC test/nvme/fused_ordering/fused_ordering.o 00:03:59.290 CC test/nvme/compliance/nvme_compliance.o 00:03:59.290 CC test/blobfs/mkfs/mkfs.o 00:03:59.290 CC test/accel/dif/dif.o 00:03:59.290 CC test/lvol/esnap/esnap.o 00:03:59.548 LINK boot_partition 00:03:59.548 LINK startup 00:03:59.548 LINK doorbell_aers 00:03:59.548 LINK reserve 00:03:59.548 LINK connect_stress 00:03:59.548 LINK simple_copy 00:03:59.548 LINK fused_ordering 00:03:59.548 LINK err_injection 00:03:59.548 LINK reset 00:03:59.548 LINK mkfs 00:03:59.548 LINK sgl 00:03:59.548 LINK nvme_dp 00:03:59.548 LINK aer 00:03:59.548 LINK overhead 00:03:59.548 LINK fdp 00:03:59.548 LINK nvme_compliance 00:03:59.548 CC examples/nvme/arbitration/arbitration.o 00:03:59.548 CC examples/nvme/hotplug/hotplug.o 00:03:59.548 CC examples/nvme/abort/abort.o 00:03:59.548 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:59.548 CC examples/nvme/reconnect/reconnect.o 00:03:59.548 CC examples/nvme/hello_world/hello_world.o 00:03:59.548 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:59.548 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:59.807 CC examples/accel/perf/accel_perf.o 00:03:59.807 CC examples/blob/cli/blobcli.o 00:03:59.807 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:59.807 CC examples/blob/hello_world/hello_blob.o 00:03:59.807 LINK pmr_persistence 00:03:59.807 LINK cmb_copy 00:03:59.807 LINK hotplug 00:03:59.807 LINK hello_world 00:03:59.807 LINK iscsi_fuzz 00:03:59.807 LINK dif 00:03:59.807 LINK arbitration 00:04:00.131 LINK reconnect 00:04:00.131 LINK abort 00:04:00.131 LINK hello_blob 00:04:00.131 LINK hello_fsdev 00:04:00.131 LINK nvme_manage 00:04:00.131 LINK accel_perf 00:04:00.131 LINK blobcli 00:04:00.433 LINK cuse 00:04:00.433 CC test/bdev/bdevio/bdevio.o 
00:04:00.755 CC examples/bdev/hello_world/hello_bdev.o 00:04:00.755 CC examples/bdev/bdevperf/bdevperf.o 00:04:00.755 LINK bdevio 00:04:01.059 LINK hello_bdev 00:04:01.317 LINK bdevperf 00:04:01.885 CC examples/nvmf/nvmf/nvmf.o 00:04:02.144 LINK nvmf 00:04:03.081 LINK esnap 00:04:03.340 00:04:03.340 real 0m55.370s 00:04:03.340 user 6m50.709s 00:04:03.340 sys 3m2.354s 00:04:03.340 05:54:23 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:03.340 05:54:23 make -- common/autotest_common.sh@10 -- $ set +x 00:04:03.340 ************************************ 00:04:03.340 END TEST make 00:04:03.340 ************************************ 00:04:03.340 05:54:23 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:03.340 05:54:23 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:03.340 05:54:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:03.340 05:54:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:03.340 05:54:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:03.340 05:54:23 -- pm/common@44 -- $ pid=676156 00:04:03.340 05:54:23 -- pm/common@50 -- $ kill -TERM 676156 00:04:03.340 05:54:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:03.340 05:54:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:03.340 05:54:23 -- pm/common@44 -- $ pid=676157 00:04:03.340 05:54:23 -- pm/common@50 -- $ kill -TERM 676157 00:04:03.340 05:54:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:03.340 05:54:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:03.340 05:54:23 -- pm/common@44 -- $ pid=676159 00:04:03.340 05:54:23 -- pm/common@50 -- $ kill -TERM 676159 00:04:03.340 05:54:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:03.340 05:54:23 -- 
pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:03.340 05:54:23 -- pm/common@44 -- $ pid=676184 00:04:03.340 05:54:23 -- pm/common@50 -- $ sudo -E kill -TERM 676184 00:04:03.340 05:54:23 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:03.340 05:54:23 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:03.340 05:54:23 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:03.340 05:54:23 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:03.340 05:54:23 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:03.600 05:54:23 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:03.600 05:54:23 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:03.600 05:54:23 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:03.600 05:54:23 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:03.600 05:54:23 -- scripts/common.sh@336 -- # IFS=.-: 00:04:03.600 05:54:23 -- scripts/common.sh@336 -- # read -ra ver1 00:04:03.600 05:54:23 -- scripts/common.sh@337 -- # IFS=.-: 00:04:03.600 05:54:23 -- scripts/common.sh@337 -- # read -ra ver2 00:04:03.600 05:54:23 -- scripts/common.sh@338 -- # local 'op=<' 00:04:03.600 05:54:23 -- scripts/common.sh@340 -- # ver1_l=2 00:04:03.600 05:54:23 -- scripts/common.sh@341 -- # ver2_l=1 00:04:03.600 05:54:23 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:03.600 05:54:23 -- scripts/common.sh@344 -- # case "$op" in 00:04:03.600 05:54:23 -- scripts/common.sh@345 -- # : 1 00:04:03.600 05:54:23 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:03.600 05:54:23 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:03.600 05:54:23 -- scripts/common.sh@365 -- # decimal 1 00:04:03.600 05:54:23 -- scripts/common.sh@353 -- # local d=1 00:04:03.600 05:54:23 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:03.600 05:54:23 -- scripts/common.sh@355 -- # echo 1 00:04:03.600 05:54:23 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:03.600 05:54:23 -- scripts/common.sh@366 -- # decimal 2 00:04:03.600 05:54:23 -- scripts/common.sh@353 -- # local d=2 00:04:03.600 05:54:23 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:03.600 05:54:23 -- scripts/common.sh@355 -- # echo 2 00:04:03.600 05:54:23 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:03.600 05:54:23 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:03.600 05:54:23 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:03.600 05:54:23 -- scripts/common.sh@368 -- # return 0 00:04:03.600 05:54:23 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:03.600 05:54:23 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:03.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.600 --rc genhtml_branch_coverage=1 00:04:03.600 --rc genhtml_function_coverage=1 00:04:03.600 --rc genhtml_legend=1 00:04:03.600 --rc geninfo_all_blocks=1 00:04:03.600 --rc geninfo_unexecuted_blocks=1 00:04:03.600 00:04:03.600 ' 00:04:03.600 05:54:23 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:03.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.600 --rc genhtml_branch_coverage=1 00:04:03.600 --rc genhtml_function_coverage=1 00:04:03.600 --rc genhtml_legend=1 00:04:03.600 --rc geninfo_all_blocks=1 00:04:03.600 --rc geninfo_unexecuted_blocks=1 00:04:03.600 00:04:03.600 ' 00:04:03.600 05:54:23 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:03.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.600 --rc genhtml_branch_coverage=1 00:04:03.600 --rc 
genhtml_function_coverage=1 00:04:03.600 --rc genhtml_legend=1 00:04:03.600 --rc geninfo_all_blocks=1 00:04:03.600 --rc geninfo_unexecuted_blocks=1 00:04:03.600 00:04:03.600 ' 00:04:03.600 05:54:23 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:03.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.600 --rc genhtml_branch_coverage=1 00:04:03.600 --rc genhtml_function_coverage=1 00:04:03.600 --rc genhtml_legend=1 00:04:03.600 --rc geninfo_all_blocks=1 00:04:03.600 --rc geninfo_unexecuted_blocks=1 00:04:03.600 00:04:03.600 ' 00:04:03.600 05:54:23 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:03.600 05:54:23 -- nvmf/common.sh@7 -- # uname -s 00:04:03.600 05:54:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:03.600 05:54:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:03.600 05:54:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:03.600 05:54:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:03.600 05:54:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:03.600 05:54:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:03.600 05:54:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:03.600 05:54:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:03.600 05:54:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:03.600 05:54:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:03.600 05:54:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:03.600 05:54:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:03.600 05:54:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:03.600 05:54:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:03.600 05:54:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:03.601 05:54:23 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:03.601 05:54:23 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:03.601 05:54:23 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:03.601 05:54:23 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:03.601 05:54:23 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:03.601 05:54:23 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:03.601 05:54:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.601 05:54:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.601 05:54:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.601 05:54:23 -- paths/export.sh@5 -- # export PATH 00:04:03.601 05:54:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.601 05:54:23 -- nvmf/common.sh@51 -- # : 0 00:04:03.601 05:54:23 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:03.601 05:54:23 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:04:03.601 05:54:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:03.601 05:54:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:03.601 05:54:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:03.601 05:54:23 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:03.601 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:03.601 05:54:23 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:03.601 05:54:23 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:03.601 05:54:23 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:03.601 05:54:23 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:03.601 05:54:23 -- spdk/autotest.sh@32 -- # uname -s 00:04:03.601 05:54:23 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:03.601 05:54:23 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:03.601 05:54:23 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:03.601 05:54:23 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:03.601 05:54:23 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:03.601 05:54:23 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:03.601 05:54:23 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:03.601 05:54:23 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:03.601 05:54:23 -- spdk/autotest.sh@48 -- # udevadm_pid=756826 00:04:03.601 05:54:23 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:03.601 05:54:23 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:03.601 05:54:23 -- pm/common@17 -- # local monitor 00:04:03.601 05:54:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:03.601 05:54:23 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:04:03.601 05:54:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:03.601 05:54:23 -- pm/common@21 -- # date +%s 00:04:03.601 05:54:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:03.601 05:54:23 -- pm/common@21 -- # date +%s 00:04:03.601 05:54:23 -- pm/common@25 -- # sleep 1 00:04:03.601 05:54:23 -- pm/common@21 -- # date +%s 00:04:03.601 05:54:23 -- pm/common@21 -- # date +%s 00:04:03.601 05:54:23 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734238463 00:04:03.601 05:54:23 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734238463 00:04:03.601 05:54:23 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734238463 00:04:03.601 05:54:23 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734238463 00:04:03.601 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734238463_collect-vmstat.pm.log 00:04:03.601 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734238463_collect-cpu-load.pm.log 00:04:03.601 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734238463_collect-cpu-temp.pm.log 00:04:03.601 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734238463_collect-bmc-pm.bmc.pm.log 00:04:04.539 
05:54:24 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:04.539 05:54:24 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:04.539 05:54:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:04.539 05:54:24 -- common/autotest_common.sh@10 -- # set +x 00:04:04.539 05:54:24 -- spdk/autotest.sh@59 -- # create_test_list 00:04:04.539 05:54:24 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:04.539 05:54:24 -- common/autotest_common.sh@10 -- # set +x 00:04:04.539 05:54:24 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:04.539 05:54:24 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:04.539 05:54:24 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:04.539 05:54:24 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:04.539 05:54:24 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:04.539 05:54:24 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:04.539 05:54:24 -- common/autotest_common.sh@1457 -- # uname 00:04:04.539 05:54:24 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:04.539 05:54:24 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:04.539 05:54:24 -- common/autotest_common.sh@1477 -- # uname 00:04:04.539 05:54:24 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:04.539 05:54:24 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:04.539 05:54:24 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:04.798 lcov: LCOV version 1.15 00:04:04.798 05:54:24 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:22.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:22.887 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:29.452 05:54:49 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:29.452 05:54:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.452 05:54:49 -- common/autotest_common.sh@10 -- # set +x 00:04:29.452 05:54:49 -- spdk/autotest.sh@78 -- # rm -f 00:04:29.452 05:54:49 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:31.987 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:04:31.987 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:31.987 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:31.987 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:31.987 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:31.987 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:31.987 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:31.987 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:31.987 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:31.987 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:31.987 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:32.247 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:32.247 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:32.247 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:32.247 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:32.247 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:32.247 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:32.247 05:54:52 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:32.247 05:54:52 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:32.247 05:54:52 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:32.247 05:54:52 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:32.247 05:54:52 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:32.247 05:54:52 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:32.247 05:54:52 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:32.247 05:54:52 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:04:32.247 05:54:52 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:32.247 05:54:52 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:32.247 05:54:52 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:32.247 05:54:52 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:32.247 05:54:52 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:32.247 05:54:52 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:32.247 05:54:52 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:32.247 05:54:52 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:32.247 05:54:52 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:32.247 05:54:52 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:32.247 05:54:52 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:32.247 No valid GPT data, bailing 00:04:32.247 05:54:52 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:32.247 05:54:52 -- scripts/common.sh@394 -- # pt= 00:04:32.247 05:54:52 -- scripts/common.sh@395 -- 
# return 1 00:04:32.247 05:54:52 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:32.247 1+0 records in 00:04:32.247 1+0 records out 00:04:32.247 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00565813 s, 185 MB/s 00:04:32.506 05:54:52 -- spdk/autotest.sh@105 -- # sync 00:04:32.506 05:54:52 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:32.506 05:54:52 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:32.506 05:54:52 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:37.781 05:54:57 -- spdk/autotest.sh@111 -- # uname -s 00:04:37.781 05:54:57 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:37.781 05:54:57 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:37.781 05:54:57 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:40.316 Hugepages 00:04:40.316 node hugesize free / total 00:04:40.575 node0 1048576kB 0 / 0 00:04:40.575 node0 2048kB 0 / 0 00:04:40.575 node1 1048576kB 0 / 0 00:04:40.575 node1 2048kB 0 / 0 00:04:40.575 00:04:40.575 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:40.575 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:40.575 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:40.575 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:40.575 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:40.575 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:40.575 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:40.575 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:40.575 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:40.575 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:40.575 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:40.575 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:40.575 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:40.575 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:40.575 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:40.575 I/OAT 0000:80:04.5 8086 
2021 1 ioatdma - - 00:04:40.575 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:40.575 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:40.575 05:55:00 -- spdk/autotest.sh@117 -- # uname -s 00:04:40.575 05:55:00 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:40.575 05:55:00 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:40.575 05:55:00 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:43.864 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:43.864 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:43.864 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:43.864 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:43.864 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:43.864 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:43.864 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:43.864 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:43.864 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:43.864 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:43.864 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:43.864 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:43.864 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:43.864 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:43.864 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:43.864 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:44.432 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:44.432 05:55:04 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:45.811 05:55:05 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:45.811 05:55:05 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:45.811 05:55:05 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:45.811 05:55:05 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:45.811 05:55:05 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:45.811 05:55:05 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:45.811 05:55:05 -- 
common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:45.811 05:55:05 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:45.811 05:55:05 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:45.811 05:55:05 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:45.811 05:55:05 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:45.811 05:55:05 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:48.346 Waiting for block devices as requested 00:04:48.346 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:48.606 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:48.606 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:48.606 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:48.606 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:48.865 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:48.865 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:48.865 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:49.124 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:49.124 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:49.124 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:49.383 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:49.383 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:49.383 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:49.383 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:49.642 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:49.642 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:49.642 05:55:09 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:49.642 05:55:09 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:49.642 05:55:09 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:49.642 05:55:09 -- 
common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:04:49.642 05:55:09 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:49.642 05:55:09 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:49.642 05:55:09 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:49.642 05:55:09 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:49.642 05:55:09 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:49.642 05:55:09 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:49.642 05:55:09 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:49.642 05:55:09 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:49.642 05:55:09 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:49.642 05:55:09 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:04:49.642 05:55:09 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:49.642 05:55:09 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:49.642 05:55:09 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:49.642 05:55:09 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:49.642 05:55:09 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:49.902 05:55:09 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:49.902 05:55:09 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:49.902 05:55:09 -- common/autotest_common.sh@1543 -- # continue 00:04:49.902 05:55:09 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:49.902 05:55:09 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:49.902 05:55:09 -- common/autotest_common.sh@10 -- # set +x 00:04:49.902 05:55:09 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:49.902 05:55:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:49.902 
05:55:09 -- common/autotest_common.sh@10 -- # set +x 00:04:49.902 05:55:09 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:53.194 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:53.194 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:53.194 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:53.194 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:53.194 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:53.194 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:53.194 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:53.194 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:53.194 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:53.194 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:53.194 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:53.194 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:53.194 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:53.194 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:53.194 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:53.194 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:53.453 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:53.712 05:55:13 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:53.712 05:55:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:53.712 05:55:13 -- common/autotest_common.sh@10 -- # set +x 00:04:53.712 05:55:13 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:53.712 05:55:13 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:53.712 05:55:13 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:53.712 05:55:13 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:53.712 05:55:13 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:53.712 05:55:13 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:53.712 05:55:13 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:53.712 05:55:13 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 
00:04:53.712 05:55:13 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:53.712 05:55:13 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:53.712 05:55:13 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:53.712 05:55:13 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:53.712 05:55:13 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:53.712 05:55:13 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:53.712 05:55:13 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:53.712 05:55:13 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:53.712 05:55:13 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:53.712 05:55:13 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:53.712 05:55:13 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:53.712 05:55:13 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:53.712 05:55:13 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:53.712 05:55:13 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:04:53.712 05:55:13 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:04:53.712 05:55:13 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=770797 00:04:53.712 05:55:13 -- common/autotest_common.sh@1585 -- # waitforlisten 770797 00:04:53.712 05:55:13 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.712 05:55:13 -- common/autotest_common.sh@835 -- # '[' -z 770797 ']' 00:04:53.712 05:55:13 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.712 05:55:13 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.712 05:55:13 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.712 05:55:13 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.712 05:55:13 -- common/autotest_common.sh@10 -- # set +x 00:04:53.971 [2024-12-15 05:55:13.886262] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:04:53.971 [2024-12-15 05:55:13.886313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid770797 ] 00:04:53.972 [2024-12-15 05:55:13.945037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.972 [2024-12-15 05:55:13.967936] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.230 05:55:14 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.230 05:55:14 -- common/autotest_common.sh@868 -- # return 0 00:04:54.230 05:55:14 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:54.230 05:55:14 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:54.230 05:55:14 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:57.523 nvme0n1 00:04:57.523 05:55:17 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:57.523 [2024-12-15 05:55:17.350381] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:57.523 [2024-12-15 05:55:17.350412] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:57.523 request: 00:04:57.523 { 00:04:57.523 "nvme_ctrlr_name": "nvme0", 00:04:57.523 "password": "test", 00:04:57.523 "method": 
"bdev_nvme_opal_revert", 00:04:57.523 "req_id": 1 00:04:57.523 } 00:04:57.523 Got JSON-RPC error response 00:04:57.523 response: 00:04:57.523 { 00:04:57.523 "code": -32603, 00:04:57.523 "message": "Internal error" 00:04:57.523 } 00:04:57.523 05:55:17 -- common/autotest_common.sh@1591 -- # true 00:04:57.523 05:55:17 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:57.523 05:55:17 -- common/autotest_common.sh@1595 -- # killprocess 770797 00:04:57.523 05:55:17 -- common/autotest_common.sh@954 -- # '[' -z 770797 ']' 00:04:57.523 05:55:17 -- common/autotest_common.sh@958 -- # kill -0 770797 00:04:57.523 05:55:17 -- common/autotest_common.sh@959 -- # uname 00:04:57.523 05:55:17 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.523 05:55:17 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 770797 00:04:57.523 05:55:17 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.523 05:55:17 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.523 05:55:17 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 770797' 00:04:57.523 killing process with pid 770797 00:04:57.523 05:55:17 -- common/autotest_common.sh@973 -- # kill 770797 00:04:57.523 05:55:17 -- common/autotest_common.sh@978 -- # wait 770797 00:04:58.900 05:55:19 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:58.900 05:55:19 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:58.900 05:55:19 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:58.900 05:55:19 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:58.900 05:55:19 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:58.900 05:55:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:58.900 05:55:19 -- common/autotest_common.sh@10 -- # set +x 00:04:58.900 05:55:19 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:58.900 05:55:19 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:58.900 05:55:19 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.900 05:55:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.900 05:55:19 -- common/autotest_common.sh@10 -- # set +x 00:04:59.159 ************************************ 00:04:59.159 START TEST env 00:04:59.159 ************************************ 00:04:59.159 05:55:19 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:59.159 * Looking for test storage... 00:04:59.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:59.159 05:55:19 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:59.159 05:55:19 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:59.159 05:55:19 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:59.159 05:55:19 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:59.159 05:55:19 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.159 05:55:19 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.159 05:55:19 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.159 05:55:19 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.159 05:55:19 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.159 05:55:19 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.159 05:55:19 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.159 05:55:19 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.159 05:55:19 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.159 05:55:19 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.159 05:55:19 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.159 05:55:19 env -- scripts/common.sh@344 -- # case "$op" in 00:04:59.159 05:55:19 env -- scripts/common.sh@345 -- # : 1 00:04:59.159 05:55:19 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.159 05:55:19 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.159 05:55:19 env -- scripts/common.sh@365 -- # decimal 1 00:04:59.159 05:55:19 env -- scripts/common.sh@353 -- # local d=1 00:04:59.159 05:55:19 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.159 05:55:19 env -- scripts/common.sh@355 -- # echo 1 00:04:59.159 05:55:19 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.159 05:55:19 env -- scripts/common.sh@366 -- # decimal 2 00:04:59.159 05:55:19 env -- scripts/common.sh@353 -- # local d=2 00:04:59.159 05:55:19 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.159 05:55:19 env -- scripts/common.sh@355 -- # echo 2 00:04:59.159 05:55:19 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.159 05:55:19 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.159 05:55:19 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.159 05:55:19 env -- scripts/common.sh@368 -- # return 0 00:04:59.159 05:55:19 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.159 05:55:19 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:59.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.159 --rc genhtml_branch_coverage=1 00:04:59.159 --rc genhtml_function_coverage=1 00:04:59.159 --rc genhtml_legend=1 00:04:59.159 --rc geninfo_all_blocks=1 00:04:59.159 --rc geninfo_unexecuted_blocks=1 00:04:59.159 00:04:59.159 ' 00:04:59.159 05:55:19 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:59.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.159 --rc genhtml_branch_coverage=1 00:04:59.159 --rc genhtml_function_coverage=1 00:04:59.159 --rc genhtml_legend=1 00:04:59.159 --rc geninfo_all_blocks=1 00:04:59.159 --rc geninfo_unexecuted_blocks=1 00:04:59.159 00:04:59.159 ' 00:04:59.159 05:55:19 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:59.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:59.159 --rc genhtml_branch_coverage=1 00:04:59.159 --rc genhtml_function_coverage=1 00:04:59.159 --rc genhtml_legend=1 00:04:59.159 --rc geninfo_all_blocks=1 00:04:59.159 --rc geninfo_unexecuted_blocks=1 00:04:59.159 00:04:59.159 ' 00:04:59.159 05:55:19 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:59.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.159 --rc genhtml_branch_coverage=1 00:04:59.159 --rc genhtml_function_coverage=1 00:04:59.159 --rc genhtml_legend=1 00:04:59.159 --rc geninfo_all_blocks=1 00:04:59.159 --rc geninfo_unexecuted_blocks=1 00:04:59.159 00:04:59.159 ' 00:04:59.159 05:55:19 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:59.159 05:55:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.159 05:55:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.159 05:55:19 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.159 ************************************ 00:04:59.159 START TEST env_memory 00:04:59.159 ************************************ 00:04:59.159 05:55:19 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:59.159 00:04:59.159 00:04:59.159 CUnit - A unit testing framework for C - Version 2.1-3 00:04:59.159 http://cunit.sourceforge.net/ 00:04:59.159 00:04:59.159 00:04:59.159 Suite: memory 00:04:59.419 Test: alloc and free memory map ...[2024-12-15 05:55:19.323810] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:59.419 passed 00:04:59.419 Test: mem map translation ...[2024-12-15 05:55:19.342650] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:59.419 [2024-12-15 
05:55:19.342663] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:59.419 [2024-12-15 05:55:19.342711] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:59.419 [2024-12-15 05:55:19.342718] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:59.419 passed 00:04:59.419 Test: mem map registration ...[2024-12-15 05:55:19.379301] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:59.419 [2024-12-15 05:55:19.379313] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:59.419 passed 00:04:59.419 Test: mem map adjacent registrations ...passed 00:04:59.419 00:04:59.419 Run Summary: Type Total Ran Passed Failed Inactive 00:04:59.419 suites 1 1 n/a 0 0 00:04:59.419 tests 4 4 4 0 0 00:04:59.419 asserts 152 152 152 0 n/a 00:04:59.419 00:04:59.419 Elapsed time = 0.135 seconds 00:04:59.419 00:04:59.419 real 0m0.149s 00:04:59.419 user 0m0.141s 00:04:59.419 sys 0m0.007s 00:04:59.419 05:55:19 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.419 05:55:19 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:59.419 ************************************ 00:04:59.419 END TEST env_memory 00:04:59.419 ************************************ 00:04:59.419 05:55:19 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:59.419 05:55:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:04:59.419 05:55:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.419 05:55:19 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.419 ************************************ 00:04:59.419 START TEST env_vtophys 00:04:59.419 ************************************ 00:04:59.419 05:55:19 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:59.419 EAL: lib.eal log level changed from notice to debug 00:04:59.419 EAL: Detected lcore 0 as core 0 on socket 0 00:04:59.419 EAL: Detected lcore 1 as core 1 on socket 0 00:04:59.419 EAL: Detected lcore 2 as core 2 on socket 0 00:04:59.419 EAL: Detected lcore 3 as core 3 on socket 0 00:04:59.419 EAL: Detected lcore 4 as core 4 on socket 0 00:04:59.419 EAL: Detected lcore 5 as core 5 on socket 0 00:04:59.419 EAL: Detected lcore 6 as core 6 on socket 0 00:04:59.419 EAL: Detected lcore 7 as core 8 on socket 0 00:04:59.419 EAL: Detected lcore 8 as core 9 on socket 0 00:04:59.419 EAL: Detected lcore 9 as core 10 on socket 0 00:04:59.419 EAL: Detected lcore 10 as core 11 on socket 0 00:04:59.419 EAL: Detected lcore 11 as core 12 on socket 0 00:04:59.419 EAL: Detected lcore 12 as core 13 on socket 0 00:04:59.419 EAL: Detected lcore 13 as core 16 on socket 0 00:04:59.419 EAL: Detected lcore 14 as core 17 on socket 0 00:04:59.419 EAL: Detected lcore 15 as core 18 on socket 0 00:04:59.419 EAL: Detected lcore 16 as core 19 on socket 0 00:04:59.419 EAL: Detected lcore 17 as core 20 on socket 0 00:04:59.419 EAL: Detected lcore 18 as core 21 on socket 0 00:04:59.419 EAL: Detected lcore 19 as core 25 on socket 0 00:04:59.419 EAL: Detected lcore 20 as core 26 on socket 0 00:04:59.419 EAL: Detected lcore 21 as core 27 on socket 0 00:04:59.419 EAL: Detected lcore 22 as core 28 on socket 0 00:04:59.419 EAL: Detected lcore 23 as core 29 on socket 0 00:04:59.419 EAL: Detected lcore 24 as core 0 on socket 1 00:04:59.419 EAL: Detected lcore 25 
as core 1 on socket 1 00:04:59.419 EAL: Detected lcore 26 as core 2 on socket 1 00:04:59.419 EAL: Detected lcore 27 as core 3 on socket 1 00:04:59.419 EAL: Detected lcore 28 as core 4 on socket 1 00:04:59.419 EAL: Detected lcore 29 as core 5 on socket 1 00:04:59.419 EAL: Detected lcore 30 as core 6 on socket 1 00:04:59.419 EAL: Detected lcore 31 as core 8 on socket 1 00:04:59.419 EAL: Detected lcore 32 as core 9 on socket 1 00:04:59.419 EAL: Detected lcore 33 as core 10 on socket 1 00:04:59.419 EAL: Detected lcore 34 as core 11 on socket 1 00:04:59.419 EAL: Detected lcore 35 as core 12 on socket 1 00:04:59.419 EAL: Detected lcore 36 as core 13 on socket 1 00:04:59.419 EAL: Detected lcore 37 as core 16 on socket 1 00:04:59.419 EAL: Detected lcore 38 as core 17 on socket 1 00:04:59.419 EAL: Detected lcore 39 as core 18 on socket 1 00:04:59.419 EAL: Detected lcore 40 as core 19 on socket 1 00:04:59.419 EAL: Detected lcore 41 as core 20 on socket 1 00:04:59.419 EAL: Detected lcore 42 as core 21 on socket 1 00:04:59.419 EAL: Detected lcore 43 as core 25 on socket 1 00:04:59.419 EAL: Detected lcore 44 as core 26 on socket 1 00:04:59.419 EAL: Detected lcore 45 as core 27 on socket 1 00:04:59.419 EAL: Detected lcore 46 as core 28 on socket 1 00:04:59.419 EAL: Detected lcore 47 as core 29 on socket 1 00:04:59.419 EAL: Detected lcore 48 as core 0 on socket 0 00:04:59.419 EAL: Detected lcore 49 as core 1 on socket 0 00:04:59.419 EAL: Detected lcore 50 as core 2 on socket 0 00:04:59.419 EAL: Detected lcore 51 as core 3 on socket 0 00:04:59.419 EAL: Detected lcore 52 as core 4 on socket 0 00:04:59.419 EAL: Detected lcore 53 as core 5 on socket 0 00:04:59.419 EAL: Detected lcore 54 as core 6 on socket 0 00:04:59.419 EAL: Detected lcore 55 as core 8 on socket 0 00:04:59.419 EAL: Detected lcore 56 as core 9 on socket 0 00:04:59.419 EAL: Detected lcore 57 as core 10 on socket 0 00:04:59.419 EAL: Detected lcore 58 as core 11 on socket 0 00:04:59.419 EAL: Detected lcore 59 as core 12 
on socket 0 00:04:59.419 EAL: Detected lcore 60 as core 13 on socket 0 00:04:59.419 EAL: Detected lcore 61 as core 16 on socket 0 00:04:59.419 EAL: Detected lcore 62 as core 17 on socket 0 00:04:59.419 EAL: Detected lcore 63 as core 18 on socket 0 00:04:59.419 EAL: Detected lcore 64 as core 19 on socket 0 00:04:59.419 EAL: Detected lcore 65 as core 20 on socket 0 00:04:59.419 EAL: Detected lcore 66 as core 21 on socket 0 00:04:59.419 EAL: Detected lcore 67 as core 25 on socket 0 00:04:59.419 EAL: Detected lcore 68 as core 26 on socket 0 00:04:59.419 EAL: Detected lcore 69 as core 27 on socket 0 00:04:59.419 EAL: Detected lcore 70 as core 28 on socket 0 00:04:59.419 EAL: Detected lcore 71 as core 29 on socket 0 00:04:59.419 EAL: Detected lcore 72 as core 0 on socket 1 00:04:59.419 EAL: Detected lcore 73 as core 1 on socket 1 00:04:59.419 EAL: Detected lcore 74 as core 2 on socket 1 00:04:59.419 EAL: Detected lcore 75 as core 3 on socket 1 00:04:59.419 EAL: Detected lcore 76 as core 4 on socket 1 00:04:59.419 EAL: Detected lcore 77 as core 5 on socket 1 00:04:59.419 EAL: Detected lcore 78 as core 6 on socket 1 00:04:59.419 EAL: Detected lcore 79 as core 8 on socket 1 00:04:59.419 EAL: Detected lcore 80 as core 9 on socket 1 00:04:59.419 EAL: Detected lcore 81 as core 10 on socket 1 00:04:59.419 EAL: Detected lcore 82 as core 11 on socket 1 00:04:59.419 EAL: Detected lcore 83 as core 12 on socket 1 00:04:59.419 EAL: Detected lcore 84 as core 13 on socket 1 00:04:59.419 EAL: Detected lcore 85 as core 16 on socket 1 00:04:59.419 EAL: Detected lcore 86 as core 17 on socket 1 00:04:59.419 EAL: Detected lcore 87 as core 18 on socket 1 00:04:59.419 EAL: Detected lcore 88 as core 19 on socket 1 00:04:59.419 EAL: Detected lcore 89 as core 20 on socket 1 00:04:59.419 EAL: Detected lcore 90 as core 21 on socket 1 00:04:59.419 EAL: Detected lcore 91 as core 25 on socket 1 00:04:59.419 EAL: Detected lcore 92 as core 26 on socket 1 00:04:59.419 EAL: Detected lcore 93 as core 27 on 
socket 1 00:04:59.419 EAL: Detected lcore 94 as core 28 on socket 1 00:04:59.419 EAL: Detected lcore 95 as core 29 on socket 1 00:04:59.419 EAL: Maximum logical cores by configuration: 128 00:04:59.419 EAL: Detected CPU lcores: 96 00:04:59.419 EAL: Detected NUMA nodes: 2 00:04:59.419 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:59.419 EAL: Detected shared linkage of DPDK 00:04:59.419 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:04:59.420 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:04:59.420 EAL: Registered [vdev] bus. 00:04:59.420 EAL: bus.vdev log level changed from disabled to notice 00:04:59.420 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:04:59.420 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:04:59.420 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:59.420 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:59.420 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:04:59.420 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:04:59.420 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:04:59.420 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:04:59.420 EAL: No shared files mode enabled, IPC will be disabled 00:04:59.420 EAL: No shared files mode enabled, IPC is disabled 00:04:59.420 EAL: Bus pci wants IOVA as 'DC' 00:04:59.420 EAL: Bus vdev wants IOVA as 'DC' 00:04:59.420 EAL: Buses did not request a specific IOVA mode. 
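The "Detected lcore N as core M on socket S" table above reflects the Linux CPU topology EAL reads at startup; the same core/socket mapping is exposed under `/sys/devices/system/cpu/cpu*/topology/`. A small sketch of that lookup, run against mock topology files so it works anywhere (the mock directory and its two CPUs are invented for illustration):

```shell
# Rebuild two rows of the EAL lcore table from mock sysfs topology files.
# The real files would be
# /sys/devices/system/cpu/cpuN/topology/{core_id,physical_package_id}.
cpudir=$(mktemp -d)
for cpu in 0 1; do
    mkdir -p "$cpudir/cpu$cpu/topology"
    echo "$cpu" > "$cpudir/cpu$cpu/topology/core_id"
    echo 0      > "$cpudir/cpu$cpu/topology/physical_package_id"
done
rows=$(for c in "$cpudir"/cpu*; do
    printf 'EAL: Detected lcore %s as core %s on socket %s\n' \
        "${c##*cpu}" "$(cat "$c/topology/core_id")" \
        "$(cat "$c/topology/physical_package_id")"
done)
printf '%s\n' "$rows"
rm -rf "$cpudir"
```

On this host the lcore and core numbers diverge (lcore 7 is core 8, etc.) because the physical core IDs are not contiguous.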
00:04:59.420 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:59.420 EAL: Selected IOVA mode 'VA' 00:04:59.420 EAL: Probing VFIO support... 00:04:59.420 EAL: IOMMU type 1 (Type 1) is supported 00:04:59.420 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:59.420 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:59.420 EAL: VFIO support initialized 00:04:59.420 EAL: Ask a virtual area of 0x2e000 bytes 00:04:59.420 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:59.420 EAL: Setting up physically contiguous memory... 00:04:59.420 EAL: Setting maximum number of open files to 524288 00:04:59.420 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:59.420 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:59.420 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:59.420 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.420 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:59.420 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.420 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.420 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:59.420 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:59.420 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.420 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:59.420 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.420 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.420 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:59.420 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:59.420 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.420 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:59.420 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.420 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.420 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 
00:04:59.420 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:59.420 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.420 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:59.420 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.420 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.420 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:59.420 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:59.420 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:59.420 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.420 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:59.420 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:59.420 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.420 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:59.420 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:59.420 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.420 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:59.420 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:59.420 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.420 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:59.420 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:59.420 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.420 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:59.420 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:59.420 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.420 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:59.420 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:59.420 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.420 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:59.420 EAL: Memseg list allocated at socket 1, page 
size 0x800kB 00:04:59.420 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.420 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:59.420 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:59.420 EAL: Hugepages will be freed exactly as allocated. 00:04:59.420 EAL: No shared files mode enabled, IPC is disabled 00:04:59.420 EAL: No shared files mode enabled, IPC is disabled 00:04:59.420 EAL: TSC frequency is ~2100000 KHz 00:04:59.420 EAL: Main lcore 0 is ready (tid=7f220c0aaa00;cpuset=[0]) 00:04:59.420 EAL: Trying to obtain current memory policy. 00:04:59.420 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.420 EAL: Restoring previous memory policy: 0 00:04:59.420 EAL: request: mp_malloc_sync 00:04:59.420 EAL: No shared files mode enabled, IPC is disabled 00:04:59.420 EAL: Heap on socket 0 was expanded by 2MB 00:04:59.420 EAL: PCI device 0000:3d:00.0 on NUMA socket 0 00:04:59.420 EAL: probe driver: 8086:37d2 net_i40e 00:04:59.420 EAL: Not managed by a supported kernel driver, skipped 00:04:59.420 EAL: PCI device 0000:3d:00.1 on NUMA socket 0 00:04:59.420 EAL: probe driver: 8086:37d2 net_i40e 00:04:59.420 EAL: Not managed by a supported kernel driver, skipped 00:04:59.420 EAL: No shared files mode enabled, IPC is disabled 00:04:59.679 EAL: No shared files mode enabled, IPC is disabled 00:04:59.679 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:59.679 EAL: Mem event callback 'spdk:(nil)' registered 00:04:59.679 00:04:59.679 00:04:59.679 CUnit - A unit testing framework for C - Version 2.1-3 00:04:59.679 http://cunit.sourceforge.net/ 00:04:59.679 00:04:59.679 00:04:59.679 Suite: components_suite 00:04:59.679 Test: vtophys_malloc_test ...passed 00:04:59.679 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
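The repeated "Ask a virtual area ... Virtual area found" exchanges above follow a fixed stride: each memseg list reserves a 0x61000 header plus a 0x400000000 (16 GiB) data region, and EAL rounds each next base up to the 2 MiB hugepage size, so consecutive data regions sit 0x400200000 apart. A quick arithmetic check of this reading of the socket-0 addresses in the log:

```shell
# Reproduce the four socket-0 memseg data-region addresses from the log
# (0x200000200000, 0x200400400000, 0x200800600000, 0x200c00800000)
# by stepping 0x400000000 (data) + 0x200000 (header, 2 MiB-rounded) per list.
base=$(( 0x200000200000 ))
stride=$(( 0x400000000 + 0x200000 ))
for i in 0 1 2 3; do
    printf 'memseg list %d data region at 0x%x\n' "$i" $(( base + i * stride ))
done
```

The socket-1 lists continue the same stride from 0x201000a00000, matching the addresses later in the log.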
00:04:59.679 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.679 EAL: Restoring previous memory policy: 4 00:04:59.679 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.679 EAL: request: mp_malloc_sync 00:04:59.679 EAL: No shared files mode enabled, IPC is disabled 00:04:59.679 EAL: Heap on socket 0 was expanded by 4MB 00:04:59.679 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.679 EAL: request: mp_malloc_sync 00:04:59.679 EAL: No shared files mode enabled, IPC is disabled 00:04:59.679 EAL: Heap on socket 0 was shrunk by 4MB 00:04:59.679 EAL: Trying to obtain current memory policy. 00:04:59.679 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.679 EAL: Restoring previous memory policy: 4 00:04:59.679 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.679 EAL: request: mp_malloc_sync 00:04:59.679 EAL: No shared files mode enabled, IPC is disabled 00:04:59.679 EAL: Heap on socket 0 was expanded by 6MB 00:04:59.679 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.679 EAL: request: mp_malloc_sync 00:04:59.679 EAL: No shared files mode enabled, IPC is disabled 00:04:59.679 EAL: Heap on socket 0 was shrunk by 6MB 00:04:59.679 EAL: Trying to obtain current memory policy. 00:04:59.679 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.679 EAL: Restoring previous memory policy: 4 00:04:59.679 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.679 EAL: request: mp_malloc_sync 00:04:59.679 EAL: No shared files mode enabled, IPC is disabled 00:04:59.679 EAL: Heap on socket 0 was expanded by 10MB 00:04:59.679 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.679 EAL: request: mp_malloc_sync 00:04:59.679 EAL: No shared files mode enabled, IPC is disabled 00:04:59.679 EAL: Heap on socket 0 was shrunk by 10MB 00:04:59.679 EAL: Trying to obtain current memory policy. 
00:04:59.679 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.679 EAL: Restoring previous memory policy: 4 00:04:59.680 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.680 EAL: request: mp_malloc_sync 00:04:59.680 EAL: No shared files mode enabled, IPC is disabled 00:04:59.680 EAL: Heap on socket 0 was expanded by 18MB 00:04:59.680 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.680 EAL: request: mp_malloc_sync 00:04:59.680 EAL: No shared files mode enabled, IPC is disabled 00:04:59.680 EAL: Heap on socket 0 was shrunk by 18MB 00:04:59.680 EAL: Trying to obtain current memory policy. 00:04:59.680 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.680 EAL: Restoring previous memory policy: 4 00:04:59.680 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.680 EAL: request: mp_malloc_sync 00:04:59.680 EAL: No shared files mode enabled, IPC is disabled 00:04:59.680 EAL: Heap on socket 0 was expanded by 34MB 00:04:59.680 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.680 EAL: request: mp_malloc_sync 00:04:59.680 EAL: No shared files mode enabled, IPC is disabled 00:04:59.680 EAL: Heap on socket 0 was shrunk by 34MB 00:04:59.680 EAL: Trying to obtain current memory policy. 00:04:59.680 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.680 EAL: Restoring previous memory policy: 4 00:04:59.680 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.680 EAL: request: mp_malloc_sync 00:04:59.680 EAL: No shared files mode enabled, IPC is disabled 00:04:59.680 EAL: Heap on socket 0 was expanded by 66MB 00:04:59.680 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.680 EAL: request: mp_malloc_sync 00:04:59.680 EAL: No shared files mode enabled, IPC is disabled 00:04:59.680 EAL: Heap on socket 0 was shrunk by 66MB 00:04:59.680 EAL: Trying to obtain current memory policy. 
00:04:59.680 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.680 EAL: Restoring previous memory policy: 4 00:04:59.680 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.680 EAL: request: mp_malloc_sync 00:04:59.680 EAL: No shared files mode enabled, IPC is disabled 00:04:59.680 EAL: Heap on socket 0 was expanded by 130MB 00:04:59.680 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.680 EAL: request: mp_malloc_sync 00:04:59.680 EAL: No shared files mode enabled, IPC is disabled 00:04:59.680 EAL: Heap on socket 0 was shrunk by 130MB 00:04:59.680 EAL: Trying to obtain current memory policy. 00:04:59.680 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.680 EAL: Restoring previous memory policy: 4 00:04:59.680 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.680 EAL: request: mp_malloc_sync 00:04:59.680 EAL: No shared files mode enabled, IPC is disabled 00:04:59.680 EAL: Heap on socket 0 was expanded by 258MB 00:04:59.680 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.680 EAL: request: mp_malloc_sync 00:04:59.680 EAL: No shared files mode enabled, IPC is disabled 00:04:59.680 EAL: Heap on socket 0 was shrunk by 258MB 00:04:59.680 EAL: Trying to obtain current memory policy. 00:04:59.680 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.938 EAL: Restoring previous memory policy: 4 00:04:59.938 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.939 EAL: request: mp_malloc_sync 00:04:59.939 EAL: No shared files mode enabled, IPC is disabled 00:04:59.939 EAL: Heap on socket 0 was expanded by 514MB 00:04:59.939 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.198 EAL: request: mp_malloc_sync 00:05:00.198 EAL: No shared files mode enabled, IPC is disabled 00:05:00.198 EAL: Heap on socket 0 was shrunk by 514MB 00:05:00.198 EAL: Trying to obtain current memory policy. 
00:05:00.198 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.198 EAL: Restoring previous memory policy: 4 00:05:00.198 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.198 EAL: request: mp_malloc_sync 00:05:00.198 EAL: No shared files mode enabled, IPC is disabled 00:05:00.198 EAL: Heap on socket 0 was expanded by 1026MB 00:05:00.457 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.457 EAL: request: mp_malloc_sync 00:05:00.457 EAL: No shared files mode enabled, IPC is disabled 00:05:00.457 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:00.457 passed 00:05:00.457 00:05:00.457 Run Summary: Type Total Ran Passed Failed Inactive 00:05:00.457 suites 1 1 n/a 0 0 00:05:00.457 tests 2 2 2 0 0 00:05:00.457 asserts 497 497 497 0 n/a 00:05:00.457 00:05:00.457 Elapsed time = 0.974 seconds 00:05:00.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.716 EAL: request: mp_malloc_sync 00:05:00.716 EAL: No shared files mode enabled, IPC is disabled 00:05:00.716 EAL: Heap on socket 0 was shrunk by 2MB 00:05:00.716 EAL: No shared files mode enabled, IPC is disabled 00:05:00.716 EAL: No shared files mode enabled, IPC is disabled 00:05:00.716 EAL: No shared files mode enabled, IPC is disabled 00:05:00.716 00:05:00.716 real 0m1.103s 00:05:00.716 user 0m0.649s 00:05:00.716 sys 0m0.427s 00:05:00.716 05:55:20 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.716 05:55:20 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:00.716 ************************************ 00:05:00.716 END TEST env_vtophys 00:05:00.716 ************************************ 00:05:00.716 05:55:20 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:00.716 05:55:20 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.716 05:55:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.716 05:55:20 env -- common/autotest_common.sh@10 -- # set +x 00:05:00.716 
************************************ 00:05:00.716 START TEST env_pci 00:05:00.716 ************************************ 00:05:00.716 05:55:20 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:00.716 00:05:00.716 00:05:00.716 CUnit - A unit testing framework for C - Version 2.1-3 00:05:00.716 http://cunit.sourceforge.net/ 00:05:00.716 00:05:00.716 00:05:00.716 Suite: pci 00:05:00.716 Test: pci_hook ...[2024-12-15 05:55:20.684589] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 772040 has claimed it 00:05:00.716 EAL: Cannot find device (10000:00:01.0) 00:05:00.716 EAL: Failed to attach device on primary process 00:05:00.716 passed 00:05:00.716 00:05:00.716 Run Summary: Type Total Ran Passed Failed Inactive 00:05:00.716 suites 1 1 n/a 0 0 00:05:00.716 tests 1 1 1 0 0 00:05:00.716 asserts 25 25 25 0 n/a 00:05:00.716 00:05:00.716 Elapsed time = 0.026 seconds 00:05:00.716 00:05:00.716 real 0m0.046s 00:05:00.716 user 0m0.011s 00:05:00.716 sys 0m0.034s 00:05:00.716 05:55:20 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.716 05:55:20 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:00.716 ************************************ 00:05:00.716 END TEST env_pci 00:05:00.716 ************************************ 00:05:00.716 05:55:20 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:00.716 05:55:20 env -- env/env.sh@15 -- # uname 00:05:00.716 05:55:20 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:00.716 05:55:20 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:00.716 05:55:20 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:00.716 05:55:20 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:00.716 05:55:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.716 05:55:20 env -- common/autotest_common.sh@10 -- # set +x 00:05:00.716 ************************************ 00:05:00.716 START TEST env_dpdk_post_init 00:05:00.716 ************************************ 00:05:00.716 05:55:20 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:00.716 EAL: Detected CPU lcores: 96 00:05:00.716 EAL: Detected NUMA nodes: 2 00:05:00.716 EAL: Detected shared linkage of DPDK 00:05:00.716 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:00.716 EAL: Selected IOVA mode 'VA' 00:05:00.716 EAL: VFIO support initialized 00:05:00.716 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:00.976 EAL: Using IOMMU type 1 (Type 1) 00:05:00.976 EAL: Ignore mapping IO port bar(1) 00:05:00.976 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:00.976 EAL: Ignore mapping IO port bar(1) 00:05:00.976 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:00.976 EAL: Ignore mapping IO port bar(1) 00:05:00.976 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:00.976 EAL: Ignore mapping IO port bar(1) 00:05:00.976 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:00.976 EAL: Ignore mapping IO port bar(1) 00:05:00.976 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:00.976 EAL: Ignore mapping IO port bar(1) 00:05:00.976 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:00.976 EAL: Ignore mapping IO port bar(1) 00:05:00.976 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:00.976 EAL: Ignore mapping IO port bar(1) 00:05:00.976 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:01.912 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:05:01.912 EAL: Ignore mapping IO port bar(1) 00:05:01.912 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:01.912 EAL: Ignore mapping IO port bar(1) 00:05:01.912 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:01.912 EAL: Ignore mapping IO port bar(1) 00:05:01.912 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:01.912 EAL: Ignore mapping IO port bar(1) 00:05:01.912 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:01.912 EAL: Ignore mapping IO port bar(1) 00:05:01.912 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:01.912 EAL: Ignore mapping IO port bar(1) 00:05:01.912 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:01.912 EAL: Ignore mapping IO port bar(1) 00:05:01.912 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:01.912 EAL: Ignore mapping IO port bar(1) 00:05:01.912 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:05.198 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:05:05.198 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:05:05.198 Starting DPDK initialization... 00:05:05.198 Starting SPDK post initialization... 00:05:05.198 SPDK NVMe probe 00:05:05.198 Attaching to 0000:5e:00.0 00:05:05.198 Attached to 0000:5e:00.0 00:05:05.198 Cleaning up... 
00:05:05.198 00:05:05.198 real 0m4.325s 00:05:05.198 user 0m3.220s 00:05:05.198 sys 0m0.175s 00:05:05.198 05:55:25 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.198 05:55:25 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:05.198 ************************************ 00:05:05.198 END TEST env_dpdk_post_init 00:05:05.198 ************************************ 00:05:05.198 05:55:25 env -- env/env.sh@26 -- # uname 00:05:05.198 05:55:25 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:05.198 05:55:25 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:05.198 05:55:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.198 05:55:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.198 05:55:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.198 ************************************ 00:05:05.198 START TEST env_mem_callbacks 00:05:05.198 ************************************ 00:05:05.198 05:55:25 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:05.198 EAL: Detected CPU lcores: 96 00:05:05.198 EAL: Detected NUMA nodes: 2 00:05:05.198 EAL: Detected shared linkage of DPDK 00:05:05.198 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:05.198 EAL: Selected IOVA mode 'VA' 00:05:05.198 EAL: VFIO support initialized 00:05:05.198 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:05.198 00:05:05.198 00:05:05.198 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.198 http://cunit.sourceforge.net/ 00:05:05.198 00:05:05.198 00:05:05.198 Suite: memory 00:05:05.198 Test: test ... 
00:05:05.198 register 0x200000200000 2097152 00:05:05.198 malloc 3145728 00:05:05.198 register 0x200000400000 4194304 00:05:05.198 buf 0x200000500000 len 3145728 PASSED 00:05:05.198 malloc 64 00:05:05.198 buf 0x2000004fff40 len 64 PASSED 00:05:05.198 malloc 4194304 00:05:05.198 register 0x200000800000 6291456 00:05:05.198 buf 0x200000a00000 len 4194304 PASSED 00:05:05.198 free 0x200000500000 3145728 00:05:05.198 free 0x2000004fff40 64 00:05:05.198 unregister 0x200000400000 4194304 PASSED 00:05:05.198 free 0x200000a00000 4194304 00:05:05.198 unregister 0x200000800000 6291456 PASSED 00:05:05.198 malloc 8388608 00:05:05.198 register 0x200000400000 10485760 00:05:05.198 buf 0x200000600000 len 8388608 PASSED 00:05:05.198 free 0x200000600000 8388608 00:05:05.198 unregister 0x200000400000 10485760 PASSED 00:05:05.198 passed 00:05:05.198 00:05:05.198 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.198 suites 1 1 n/a 0 0 00:05:05.198 tests 1 1 1 0 0 00:05:05.198 asserts 15 15 15 0 n/a 00:05:05.198 00:05:05.198 Elapsed time = 0.008 seconds 00:05:05.198 00:05:05.198 real 0m0.063s 00:05:05.198 user 0m0.016s 00:05:05.198 sys 0m0.046s 00:05:05.198 05:55:25 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.198 05:55:25 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:05.198 ************************************ 00:05:05.198 END TEST env_mem_callbacks 00:05:05.198 ************************************ 00:05:05.198 00:05:05.198 real 0m6.220s 00:05:05.198 user 0m4.301s 00:05:05.198 sys 0m0.995s 00:05:05.198 05:55:25 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.198 05:55:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.198 ************************************ 00:05:05.198 END TEST env 00:05:05.198 ************************************ 00:05:05.198 05:55:25 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:05.198 05:55:25 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.198 05:55:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.198 05:55:25 -- common/autotest_common.sh@10 -- # set +x 00:05:05.457 ************************************ 00:05:05.457 START TEST rpc 00:05:05.457 ************************************ 00:05:05.457 05:55:25 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:05.457 * Looking for test storage... 00:05:05.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:05.457 05:55:25 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:05.457 05:55:25 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:05.457 05:55:25 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:05.457 05:55:25 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:05.457 05:55:25 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.457 05:55:25 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.457 05:55:25 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.457 05:55:25 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.457 05:55:25 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.457 05:55:25 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.457 05:55:25 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.457 05:55:25 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.457 05:55:25 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.457 05:55:25 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.457 05:55:25 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.457 05:55:25 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:05.457 05:55:25 rpc -- scripts/common.sh@345 -- # : 1 00:05:05.457 05:55:25 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.457 05:55:25 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.457 05:55:25 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:05.457 05:55:25 rpc -- scripts/common.sh@353 -- # local d=1 00:05:05.457 05:55:25 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.457 05:55:25 rpc -- scripts/common.sh@355 -- # echo 1 00:05:05.457 05:55:25 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.457 05:55:25 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:05.457 05:55:25 rpc -- scripts/common.sh@353 -- # local d=2 00:05:05.457 05:55:25 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.457 05:55:25 rpc -- scripts/common.sh@355 -- # echo 2 00:05:05.457 05:55:25 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.457 05:55:25 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.458 05:55:25 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.458 05:55:25 rpc -- scripts/common.sh@368 -- # return 0 00:05:05.458 05:55:25 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.458 05:55:25 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:05.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.458 --rc genhtml_branch_coverage=1 00:05:05.458 --rc genhtml_function_coverage=1 00:05:05.458 --rc genhtml_legend=1 00:05:05.458 --rc geninfo_all_blocks=1 00:05:05.458 --rc geninfo_unexecuted_blocks=1 00:05:05.458 00:05:05.458 ' 00:05:05.458 05:55:25 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:05.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.458 --rc genhtml_branch_coverage=1 00:05:05.458 --rc genhtml_function_coverage=1 00:05:05.458 --rc genhtml_legend=1 00:05:05.458 --rc geninfo_all_blocks=1 00:05:05.458 --rc geninfo_unexecuted_blocks=1 00:05:05.458 00:05:05.458 ' 00:05:05.458 05:55:25 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:05.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:05.458 --rc genhtml_branch_coverage=1 00:05:05.458 --rc genhtml_function_coverage=1 00:05:05.458 --rc genhtml_legend=1 00:05:05.458 --rc geninfo_all_blocks=1 00:05:05.458 --rc geninfo_unexecuted_blocks=1 00:05:05.458 00:05:05.458 ' 00:05:05.458 05:55:25 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:05.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.458 --rc genhtml_branch_coverage=1 00:05:05.458 --rc genhtml_function_coverage=1 00:05:05.458 --rc genhtml_legend=1 00:05:05.458 --rc geninfo_all_blocks=1 00:05:05.458 --rc geninfo_unexecuted_blocks=1 00:05:05.458 00:05:05.458 ' 00:05:05.458 05:55:25 rpc -- rpc/rpc.sh@65 -- # spdk_pid=772887 00:05:05.458 05:55:25 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.458 05:55:25 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:05.458 05:55:25 rpc -- rpc/rpc.sh@67 -- # waitforlisten 772887 00:05:05.458 05:55:25 rpc -- common/autotest_common.sh@835 -- # '[' -z 772887 ']' 00:05:05.458 05:55:25 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.458 05:55:25 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.458 05:55:25 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.458 05:55:25 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.458 05:55:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.458 [2024-12-15 05:55:25.590792] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:05.458 [2024-12-15 05:55:25.590838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772887 ] 00:05:05.718 [2024-12-15 05:55:25.663208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.718 [2024-12-15 05:55:25.685632] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:05.718 [2024-12-15 05:55:25.685670] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 772887' to capture a snapshot of events at runtime. 00:05:05.718 [2024-12-15 05:55:25.685677] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:05.718 [2024-12-15 05:55:25.685684] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:05.718 [2024-12-15 05:55:25.685689] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid772887 for offline analysis/debug. 
00:05:05.718 [2024-12-15 05:55:25.686213] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.977 05:55:25 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.977 05:55:25 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:05.977 05:55:25 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:05.977 05:55:25 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:05.977 05:55:25 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:05.977 05:55:25 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:05.977 05:55:25 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.977 05:55:25 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.977 05:55:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.977 ************************************ 00:05:05.977 START TEST rpc_integrity 00:05:05.977 ************************************ 00:05:05.977 05:55:25 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:05.977 05:55:25 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:05.977 05:55:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.977 05:55:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.977 05:55:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.977 05:55:25 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:05:05.977 05:55:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:05.977 05:55:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:05.977 05:55:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:05.977 05:55:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.977 05:55:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.977 05:55:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.977 05:55:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:05.977 05:55:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:05.977 05:55:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.977 05:55:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.977 05:55:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.977 05:55:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:05.977 { 00:05:05.977 "name": "Malloc0", 00:05:05.977 "aliases": [ 00:05:05.977 "cbde175d-c596-4d5b-bd4b-2e50dd7079fd" 00:05:05.977 ], 00:05:05.977 "product_name": "Malloc disk", 00:05:05.977 "block_size": 512, 00:05:05.977 "num_blocks": 16384, 00:05:05.977 "uuid": "cbde175d-c596-4d5b-bd4b-2e50dd7079fd", 00:05:05.977 "assigned_rate_limits": { 00:05:05.977 "rw_ios_per_sec": 0, 00:05:05.977 "rw_mbytes_per_sec": 0, 00:05:05.977 "r_mbytes_per_sec": 0, 00:05:05.977 "w_mbytes_per_sec": 0 00:05:05.977 }, 00:05:05.977 "claimed": false, 00:05:05.977 "zoned": false, 00:05:05.977 "supported_io_types": { 00:05:05.977 "read": true, 00:05:05.977 "write": true, 00:05:05.977 "unmap": true, 00:05:05.977 "flush": true, 00:05:05.977 "reset": true, 00:05:05.977 "nvme_admin": false, 00:05:05.977 "nvme_io": false, 00:05:05.977 "nvme_io_md": false, 00:05:05.977 "write_zeroes": true, 00:05:05.977 "zcopy": true, 00:05:05.977 "get_zone_info": false, 00:05:05.977 
"zone_management": false, 00:05:05.977 "zone_append": false, 00:05:05.977 "compare": false, 00:05:05.977 "compare_and_write": false, 00:05:05.977 "abort": true, 00:05:05.977 "seek_hole": false, 00:05:05.977 "seek_data": false, 00:05:05.977 "copy": true, 00:05:05.977 "nvme_iov_md": false 00:05:05.977 }, 00:05:05.977 "memory_domains": [ 00:05:05.977 { 00:05:05.977 "dma_device_id": "system", 00:05:05.977 "dma_device_type": 1 00:05:05.977 }, 00:05:05.977 { 00:05:05.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:05.977 "dma_device_type": 2 00:05:05.977 } 00:05:05.977 ], 00:05:05.977 "driver_specific": {} 00:05:05.977 } 00:05:05.977 ]' 00:05:05.977 05:55:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:05.977 05:55:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:05.977 05:55:26 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:05.977 05:55:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.977 05:55:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.977 [2024-12-15 05:55:26.043704] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:05.977 [2024-12-15 05:55:26.043733] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:05.977 [2024-12-15 05:55:26.043746] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1249ae0 00:05:05.977 [2024-12-15 05:55:26.043753] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:05.977 [2024-12-15 05:55:26.044809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:05.977 [2024-12-15 05:55:26.044831] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:05.977 Passthru0 00:05:05.977 05:55:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.977 05:55:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:05.977 05:55:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.977 05:55:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.977 05:55:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.978 05:55:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:05.978 { 00:05:05.978 "name": "Malloc0", 00:05:05.978 "aliases": [ 00:05:05.978 "cbde175d-c596-4d5b-bd4b-2e50dd7079fd" 00:05:05.978 ], 00:05:05.978 "product_name": "Malloc disk", 00:05:05.978 "block_size": 512, 00:05:05.978 "num_blocks": 16384, 00:05:05.978 "uuid": "cbde175d-c596-4d5b-bd4b-2e50dd7079fd", 00:05:05.978 "assigned_rate_limits": { 00:05:05.978 "rw_ios_per_sec": 0, 00:05:05.978 "rw_mbytes_per_sec": 0, 00:05:05.978 "r_mbytes_per_sec": 0, 00:05:05.978 "w_mbytes_per_sec": 0 00:05:05.978 }, 00:05:05.978 "claimed": true, 00:05:05.978 "claim_type": "exclusive_write", 00:05:05.978 "zoned": false, 00:05:05.978 "supported_io_types": { 00:05:05.978 "read": true, 00:05:05.978 "write": true, 00:05:05.978 "unmap": true, 00:05:05.978 "flush": true, 00:05:05.978 "reset": true, 00:05:05.978 "nvme_admin": false, 00:05:05.978 "nvme_io": false, 00:05:05.978 "nvme_io_md": false, 00:05:05.978 "write_zeroes": true, 00:05:05.978 "zcopy": true, 00:05:05.978 "get_zone_info": false, 00:05:05.978 "zone_management": false, 00:05:05.978 "zone_append": false, 00:05:05.978 "compare": false, 00:05:05.978 "compare_and_write": false, 00:05:05.978 "abort": true, 00:05:05.978 "seek_hole": false, 00:05:05.978 "seek_data": false, 00:05:05.978 "copy": true, 00:05:05.978 "nvme_iov_md": false 00:05:05.978 }, 00:05:05.978 "memory_domains": [ 00:05:05.978 { 00:05:05.978 "dma_device_id": "system", 00:05:05.978 "dma_device_type": 1 00:05:05.978 }, 00:05:05.978 { 00:05:05.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:05.978 "dma_device_type": 2 00:05:05.978 } 00:05:05.978 ], 00:05:05.978 "driver_specific": {} 00:05:05.978 }, 00:05:05.978 { 
00:05:05.978 "name": "Passthru0", 00:05:05.978 "aliases": [ 00:05:05.978 "a6ab0b5d-a08e-5893-a0ef-b5cff08d71a9" 00:05:05.978 ], 00:05:05.978 "product_name": "passthru", 00:05:05.978 "block_size": 512, 00:05:05.978 "num_blocks": 16384, 00:05:05.978 "uuid": "a6ab0b5d-a08e-5893-a0ef-b5cff08d71a9", 00:05:05.978 "assigned_rate_limits": { 00:05:05.978 "rw_ios_per_sec": 0, 00:05:05.978 "rw_mbytes_per_sec": 0, 00:05:05.978 "r_mbytes_per_sec": 0, 00:05:05.978 "w_mbytes_per_sec": 0 00:05:05.978 }, 00:05:05.978 "claimed": false, 00:05:05.978 "zoned": false, 00:05:05.978 "supported_io_types": { 00:05:05.978 "read": true, 00:05:05.978 "write": true, 00:05:05.978 "unmap": true, 00:05:05.978 "flush": true, 00:05:05.978 "reset": true, 00:05:05.978 "nvme_admin": false, 00:05:05.978 "nvme_io": false, 00:05:05.978 "nvme_io_md": false, 00:05:05.978 "write_zeroes": true, 00:05:05.978 "zcopy": true, 00:05:05.978 "get_zone_info": false, 00:05:05.978 "zone_management": false, 00:05:05.978 "zone_append": false, 00:05:05.978 "compare": false, 00:05:05.978 "compare_and_write": false, 00:05:05.978 "abort": true, 00:05:05.978 "seek_hole": false, 00:05:05.978 "seek_data": false, 00:05:05.978 "copy": true, 00:05:05.978 "nvme_iov_md": false 00:05:05.978 }, 00:05:05.978 "memory_domains": [ 00:05:05.978 { 00:05:05.978 "dma_device_id": "system", 00:05:05.978 "dma_device_type": 1 00:05:05.978 }, 00:05:05.978 { 00:05:05.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:05.978 "dma_device_type": 2 00:05:05.978 } 00:05:05.978 ], 00:05:05.978 "driver_specific": { 00:05:05.978 "passthru": { 00:05:05.978 "name": "Passthru0", 00:05:05.978 "base_bdev_name": "Malloc0" 00:05:05.978 } 00:05:05.978 } 00:05:05.978 } 00:05:05.978 ]' 00:05:05.978 05:55:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:06.237 05:55:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:06.237 05:55:26 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:06.237 05:55:26 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.237 05:55:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.237 05:55:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.237 05:55:26 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:06.237 05:55:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.237 05:55:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.237 05:55:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.237 05:55:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:06.237 05:55:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.237 05:55:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.237 05:55:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.237 05:55:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:06.237 05:55:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:06.237 05:55:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:06.237 00:05:06.237 real 0m0.274s 00:05:06.237 user 0m0.165s 00:05:06.237 sys 0m0.045s 00:05:06.237 05:55:26 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.237 05:55:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.237 ************************************ 00:05:06.237 END TEST rpc_integrity 00:05:06.237 ************************************ 00:05:06.237 05:55:26 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:06.237 05:55:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.237 05:55:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.237 05:55:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.237 ************************************ 00:05:06.237 START TEST rpc_plugins 
00:05:06.237 ************************************ 00:05:06.237 05:55:26 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:06.237 05:55:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:06.237 05:55:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.237 05:55:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:06.237 05:55:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.237 05:55:26 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:06.237 05:55:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:06.237 05:55:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.237 05:55:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:06.237 05:55:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.237 05:55:26 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:06.237 { 00:05:06.237 "name": "Malloc1", 00:05:06.237 "aliases": [ 00:05:06.237 "12d7faab-b82c-4f34-bf30-bbc7b47ca945" 00:05:06.237 ], 00:05:06.237 "product_name": "Malloc disk", 00:05:06.237 "block_size": 4096, 00:05:06.237 "num_blocks": 256, 00:05:06.237 "uuid": "12d7faab-b82c-4f34-bf30-bbc7b47ca945", 00:05:06.237 "assigned_rate_limits": { 00:05:06.237 "rw_ios_per_sec": 0, 00:05:06.237 "rw_mbytes_per_sec": 0, 00:05:06.237 "r_mbytes_per_sec": 0, 00:05:06.237 "w_mbytes_per_sec": 0 00:05:06.237 }, 00:05:06.237 "claimed": false, 00:05:06.237 "zoned": false, 00:05:06.237 "supported_io_types": { 00:05:06.237 "read": true, 00:05:06.237 "write": true, 00:05:06.237 "unmap": true, 00:05:06.237 "flush": true, 00:05:06.237 "reset": true, 00:05:06.237 "nvme_admin": false, 00:05:06.237 "nvme_io": false, 00:05:06.237 "nvme_io_md": false, 00:05:06.237 "write_zeroes": true, 00:05:06.237 "zcopy": true, 00:05:06.237 "get_zone_info": false, 00:05:06.237 "zone_management": false, 00:05:06.237 
"zone_append": false, 00:05:06.237 "compare": false, 00:05:06.237 "compare_and_write": false, 00:05:06.237 "abort": true, 00:05:06.237 "seek_hole": false, 00:05:06.237 "seek_data": false, 00:05:06.237 "copy": true, 00:05:06.237 "nvme_iov_md": false 00:05:06.237 }, 00:05:06.237 "memory_domains": [ 00:05:06.237 { 00:05:06.237 "dma_device_id": "system", 00:05:06.237 "dma_device_type": 1 00:05:06.237 }, 00:05:06.237 { 00:05:06.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:06.237 "dma_device_type": 2 00:05:06.237 } 00:05:06.237 ], 00:05:06.237 "driver_specific": {} 00:05:06.237 } 00:05:06.237 ]' 00:05:06.237 05:55:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:06.237 05:55:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:06.237 05:55:26 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:06.237 05:55:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.237 05:55:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:06.237 05:55:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.237 05:55:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:06.237 05:55:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.237 05:55:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:06.237 05:55:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.237 05:55:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:06.238 05:55:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:06.497 05:55:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:06.497 00:05:06.497 real 0m0.145s 00:05:06.497 user 0m0.089s 00:05:06.497 sys 0m0.020s 00:05:06.497 05:55:26 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.497 05:55:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:06.497 ************************************ 
00:05:06.497 END TEST rpc_plugins 00:05:06.497 ************************************ 00:05:06.497 05:55:26 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:06.497 05:55:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.497 05:55:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.497 05:55:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.497 ************************************ 00:05:06.497 START TEST rpc_trace_cmd_test 00:05:06.497 ************************************ 00:05:06.497 05:55:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:06.497 05:55:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:06.497 05:55:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:06.497 05:55:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.497 05:55:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:06.497 05:55:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.497 05:55:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:06.497 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid772887", 00:05:06.497 "tpoint_group_mask": "0x8", 00:05:06.497 "iscsi_conn": { 00:05:06.497 "mask": "0x2", 00:05:06.497 "tpoint_mask": "0x0" 00:05:06.497 }, 00:05:06.497 "scsi": { 00:05:06.497 "mask": "0x4", 00:05:06.497 "tpoint_mask": "0x0" 00:05:06.497 }, 00:05:06.497 "bdev": { 00:05:06.497 "mask": "0x8", 00:05:06.497 "tpoint_mask": "0xffffffffffffffff" 00:05:06.497 }, 00:05:06.497 "nvmf_rdma": { 00:05:06.497 "mask": "0x10", 00:05:06.497 "tpoint_mask": "0x0" 00:05:06.497 }, 00:05:06.497 "nvmf_tcp": { 00:05:06.497 "mask": "0x20", 00:05:06.497 "tpoint_mask": "0x0" 00:05:06.497 }, 00:05:06.497 "ftl": { 00:05:06.497 "mask": "0x40", 00:05:06.497 "tpoint_mask": "0x0" 00:05:06.497 }, 00:05:06.497 "blobfs": { 00:05:06.497 "mask": "0x80", 00:05:06.497 
"tpoint_mask": "0x0" 00:05:06.497 }, 00:05:06.497 "dsa": { 00:05:06.497 "mask": "0x200", 00:05:06.497 "tpoint_mask": "0x0" 00:05:06.497 }, 00:05:06.497 "thread": { 00:05:06.497 "mask": "0x400", 00:05:06.497 "tpoint_mask": "0x0" 00:05:06.497 }, 00:05:06.497 "nvme_pcie": { 00:05:06.497 "mask": "0x800", 00:05:06.497 "tpoint_mask": "0x0" 00:05:06.497 }, 00:05:06.497 "iaa": { 00:05:06.497 "mask": "0x1000", 00:05:06.497 "tpoint_mask": "0x0" 00:05:06.497 }, 00:05:06.497 "nvme_tcp": { 00:05:06.497 "mask": "0x2000", 00:05:06.497 "tpoint_mask": "0x0" 00:05:06.497 }, 00:05:06.497 "bdev_nvme": { 00:05:06.497 "mask": "0x4000", 00:05:06.497 "tpoint_mask": "0x0" 00:05:06.497 }, 00:05:06.497 "sock": { 00:05:06.497 "mask": "0x8000", 00:05:06.497 "tpoint_mask": "0x0" 00:05:06.497 }, 00:05:06.497 "blob": { 00:05:06.497 "mask": "0x10000", 00:05:06.497 "tpoint_mask": "0x0" 00:05:06.497 }, 00:05:06.497 "bdev_raid": { 00:05:06.497 "mask": "0x20000", 00:05:06.497 "tpoint_mask": "0x0" 00:05:06.497 }, 00:05:06.497 "scheduler": { 00:05:06.497 "mask": "0x40000", 00:05:06.497 "tpoint_mask": "0x0" 00:05:06.497 } 00:05:06.497 }' 00:05:06.497 05:55:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:06.497 05:55:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:06.497 05:55:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:06.497 05:55:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:06.497 05:55:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:06.497 05:55:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:06.497 05:55:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:06.756 05:55:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:06.756 05:55:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:06.756 05:55:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:05:06.756 00:05:06.756 real 0m0.229s 00:05:06.756 user 0m0.195s 00:05:06.756 sys 0m0.025s 00:05:06.756 05:55:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.756 05:55:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:06.756 ************************************ 00:05:06.756 END TEST rpc_trace_cmd_test 00:05:06.756 ************************************ 00:05:06.756 05:55:26 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:06.756 05:55:26 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:06.756 05:55:26 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:06.756 05:55:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.756 05:55:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.756 05:55:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.756 ************************************ 00:05:06.756 START TEST rpc_daemon_integrity 00:05:06.756 ************************************ 00:05:06.756 05:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:06.756 05:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:06.756 05:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.756 05:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.756 05:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.756 05:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:06.756 05:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:06.756 05:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:06.756 05:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:06.756 05:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.756 05:55:26 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:06.756 05:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.756 05:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:06.756 05:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:06.756 05:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.756 05:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.756 05:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.756 05:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:06.756 { 00:05:06.756 "name": "Malloc2", 00:05:06.756 "aliases": [ 00:05:06.756 "f68541bc-9faa-4bbd-a84f-87c3d0b7b386" 00:05:06.756 ], 00:05:06.756 "product_name": "Malloc disk", 00:05:06.756 "block_size": 512, 00:05:06.757 "num_blocks": 16384, 00:05:06.757 "uuid": "f68541bc-9faa-4bbd-a84f-87c3d0b7b386", 00:05:06.757 "assigned_rate_limits": { 00:05:06.757 "rw_ios_per_sec": 0, 00:05:06.757 "rw_mbytes_per_sec": 0, 00:05:06.757 "r_mbytes_per_sec": 0, 00:05:06.757 "w_mbytes_per_sec": 0 00:05:06.757 }, 00:05:06.757 "claimed": false, 00:05:06.757 "zoned": false, 00:05:06.757 "supported_io_types": { 00:05:06.757 "read": true, 00:05:06.757 "write": true, 00:05:06.757 "unmap": true, 00:05:06.757 "flush": true, 00:05:06.757 "reset": true, 00:05:06.757 "nvme_admin": false, 00:05:06.757 "nvme_io": false, 00:05:06.757 "nvme_io_md": false, 00:05:06.757 "write_zeroes": true, 00:05:06.757 "zcopy": true, 00:05:06.757 "get_zone_info": false, 00:05:06.757 "zone_management": false, 00:05:06.757 "zone_append": false, 00:05:06.757 "compare": false, 00:05:06.757 "compare_and_write": false, 00:05:06.757 "abort": true, 00:05:06.757 "seek_hole": false, 00:05:06.757 "seek_data": false, 00:05:06.757 "copy": true, 00:05:06.757 "nvme_iov_md": false 00:05:06.757 }, 00:05:06.757 "memory_domains": [ 00:05:06.757 { 
00:05:06.757 "dma_device_id": "system", 00:05:06.757 "dma_device_type": 1 00:05:06.757 }, 00:05:06.757 { 00:05:06.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:06.757 "dma_device_type": 2 00:05:06.757 } 00:05:06.757 ], 00:05:06.757 "driver_specific": {} 00:05:06.757 } 00:05:06.757 ]' 00:05:06.757 05:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:06.757 05:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:06.757 05:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:06.757 05:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.757 05:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.757 [2024-12-15 05:55:26.893984] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:06.757 [2024-12-15 05:55:26.894016] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:06.757 [2024-12-15 05:55:26.894032] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1107f80 00:05:06.757 [2024-12-15 05:55:26.894039] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:07.016 [2024-12-15 05:55:26.895005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:07.016 [2024-12-15 05:55:26.895028] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:07.016 Passthru0 00:05:07.016 05:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.016 05:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:07.016 05:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.016 05:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.016 05:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:05:07.016 05:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:07.016 { 00:05:07.016 "name": "Malloc2", 00:05:07.016 "aliases": [ 00:05:07.016 "f68541bc-9faa-4bbd-a84f-87c3d0b7b386" 00:05:07.016 ], 00:05:07.016 "product_name": "Malloc disk", 00:05:07.016 "block_size": 512, 00:05:07.016 "num_blocks": 16384, 00:05:07.016 "uuid": "f68541bc-9faa-4bbd-a84f-87c3d0b7b386", 00:05:07.016 "assigned_rate_limits": { 00:05:07.016 "rw_ios_per_sec": 0, 00:05:07.016 "rw_mbytes_per_sec": 0, 00:05:07.016 "r_mbytes_per_sec": 0, 00:05:07.016 "w_mbytes_per_sec": 0 00:05:07.016 }, 00:05:07.016 "claimed": true, 00:05:07.016 "claim_type": "exclusive_write", 00:05:07.016 "zoned": false, 00:05:07.016 "supported_io_types": { 00:05:07.016 "read": true, 00:05:07.016 "write": true, 00:05:07.016 "unmap": true, 00:05:07.016 "flush": true, 00:05:07.016 "reset": true, 00:05:07.016 "nvme_admin": false, 00:05:07.016 "nvme_io": false, 00:05:07.016 "nvme_io_md": false, 00:05:07.016 "write_zeroes": true, 00:05:07.016 "zcopy": true, 00:05:07.016 "get_zone_info": false, 00:05:07.016 "zone_management": false, 00:05:07.016 "zone_append": false, 00:05:07.016 "compare": false, 00:05:07.016 "compare_and_write": false, 00:05:07.016 "abort": true, 00:05:07.016 "seek_hole": false, 00:05:07.016 "seek_data": false, 00:05:07.016 "copy": true, 00:05:07.016 "nvme_iov_md": false 00:05:07.016 }, 00:05:07.016 "memory_domains": [ 00:05:07.016 { 00:05:07.016 "dma_device_id": "system", 00:05:07.016 "dma_device_type": 1 00:05:07.016 }, 00:05:07.016 { 00:05:07.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.016 "dma_device_type": 2 00:05:07.016 } 00:05:07.016 ], 00:05:07.016 "driver_specific": {} 00:05:07.016 }, 00:05:07.016 { 00:05:07.016 "name": "Passthru0", 00:05:07.016 "aliases": [ 00:05:07.016 "9ced49a1-b69c-5ea5-8c58-35a1a5d00b66" 00:05:07.016 ], 00:05:07.016 "product_name": "passthru", 00:05:07.016 "block_size": 512, 00:05:07.016 "num_blocks": 16384, 00:05:07.016 "uuid": 
"9ced49a1-b69c-5ea5-8c58-35a1a5d00b66", 00:05:07.016 "assigned_rate_limits": { 00:05:07.016 "rw_ios_per_sec": 0, 00:05:07.016 "rw_mbytes_per_sec": 0, 00:05:07.016 "r_mbytes_per_sec": 0, 00:05:07.016 "w_mbytes_per_sec": 0 00:05:07.016 }, 00:05:07.016 "claimed": false, 00:05:07.016 "zoned": false, 00:05:07.016 "supported_io_types": { 00:05:07.016 "read": true, 00:05:07.016 "write": true, 00:05:07.016 "unmap": true, 00:05:07.016 "flush": true, 00:05:07.016 "reset": true, 00:05:07.016 "nvme_admin": false, 00:05:07.016 "nvme_io": false, 00:05:07.016 "nvme_io_md": false, 00:05:07.016 "write_zeroes": true, 00:05:07.016 "zcopy": true, 00:05:07.016 "get_zone_info": false, 00:05:07.016 "zone_management": false, 00:05:07.016 "zone_append": false, 00:05:07.016 "compare": false, 00:05:07.016 "compare_and_write": false, 00:05:07.016 "abort": true, 00:05:07.016 "seek_hole": false, 00:05:07.016 "seek_data": false, 00:05:07.016 "copy": true, 00:05:07.016 "nvme_iov_md": false 00:05:07.016 }, 00:05:07.016 "memory_domains": [ 00:05:07.016 { 00:05:07.016 "dma_device_id": "system", 00:05:07.016 "dma_device_type": 1 00:05:07.016 }, 00:05:07.017 { 00:05:07.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.017 "dma_device_type": 2 00:05:07.017 } 00:05:07.017 ], 00:05:07.017 "driver_specific": { 00:05:07.017 "passthru": { 00:05:07.017 "name": "Passthru0", 00:05:07.017 "base_bdev_name": "Malloc2" 00:05:07.017 } 00:05:07.017 } 00:05:07.017 } 00:05:07.017 ]' 00:05:07.017 05:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:07.017 05:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:07.017 05:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:07.017 05:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.017 05:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.017 05:55:26 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.017 05:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:07.017 05:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.017 05:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.017 05:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.017 05:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:07.017 05:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.017 05:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.017 05:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.017 05:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:07.017 05:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:07.017 05:55:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:07.017 00:05:07.017 real 0m0.272s 00:05:07.017 user 0m0.169s 00:05:07.017 sys 0m0.037s 00:05:07.017 05:55:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.017 05:55:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.017 ************************************ 00:05:07.017 END TEST rpc_daemon_integrity 00:05:07.017 ************************************ 00:05:07.017 05:55:27 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:07.017 05:55:27 rpc -- rpc/rpc.sh@84 -- # killprocess 772887 00:05:07.017 05:55:27 rpc -- common/autotest_common.sh@954 -- # '[' -z 772887 ']' 00:05:07.017 05:55:27 rpc -- common/autotest_common.sh@958 -- # kill -0 772887 00:05:07.017 05:55:27 rpc -- common/autotest_common.sh@959 -- # uname 00:05:07.017 05:55:27 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.017 05:55:27 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 772887 00:05:07.017 05:55:27 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.017 05:55:27 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.017 05:55:27 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 772887' 00:05:07.017 killing process with pid 772887 00:05:07.017 05:55:27 rpc -- common/autotest_common.sh@973 -- # kill 772887 00:05:07.017 05:55:27 rpc -- common/autotest_common.sh@978 -- # wait 772887 00:05:07.585 00:05:07.585 real 0m2.055s 00:05:07.585 user 0m2.621s 00:05:07.585 sys 0m0.706s 00:05:07.585 05:55:27 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.585 05:55:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.585 ************************************ 00:05:07.585 END TEST rpc 00:05:07.585 ************************************ 00:05:07.585 05:55:27 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:07.585 05:55:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.585 05:55:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.585 05:55:27 -- common/autotest_common.sh@10 -- # set +x 00:05:07.585 ************************************ 00:05:07.585 START TEST skip_rpc 00:05:07.585 ************************************ 00:05:07.585 05:55:27 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:07.585 * Looking for test storage... 
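The teardown above calls `killprocess 772887`, which checks the pid is set, confirms the process still exists with `kill -0`, inspects its command name with `ps`, and refuses to kill `sudo` itself. A hedged sketch of that pattern, with illustrative names and GNU `ps` assumed:

```shell
# Sketch of the killprocess teardown pattern visible in the log above.
# Assumes GNU ps (`--no-headers -o comm=` as used in the trace); the
# function body is illustrative, not the exact autotest_common.sh source.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # '[' -z ... ']' guard in the log
    kill -0 "$pid" 2>/dev/null || return 0    # already gone: nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = "sudo" ] && return 1          # never kill the sudo wrapper
    kill "$pid"
    echo "killing process with pid $pid"
}
```

The `'[' reactor_0 = sudo ']'` comparison in the trace is exactly this guard firing with the SPDK reactor's process name.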
00:05:07.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:07.585 05:55:27 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:07.585 05:55:27 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:07.585 05:55:27 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:07.585 05:55:27 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:07.585 05:55:27 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.585 05:55:27 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.585 05:55:27 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.585 05:55:27 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.585 05:55:27 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.585 05:55:27 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.585 05:55:27 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.585 05:55:27 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.585 05:55:27 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.585 05:55:27 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.585 05:55:27 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.585 05:55:27 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:07.585 05:55:27 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:07.585 05:55:27 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.585 05:55:27 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:07.585 05:55:27 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:07.585 05:55:27 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:07.585 05:55:27 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.585 05:55:27 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:07.585 05:55:27 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.585 05:55:27 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:07.585 05:55:27 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:07.585 05:55:27 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.585 05:55:27 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:07.585 05:55:27 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.585 05:55:27 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.585 05:55:27 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.585 05:55:27 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:07.585 05:55:27 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.585 05:55:27 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:07.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.585 --rc genhtml_branch_coverage=1 00:05:07.585 --rc genhtml_function_coverage=1 00:05:07.585 --rc genhtml_legend=1 00:05:07.585 --rc geninfo_all_blocks=1 00:05:07.585 --rc geninfo_unexecuted_blocks=1 00:05:07.585 00:05:07.585 ' 00:05:07.585 05:55:27 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:07.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.585 --rc genhtml_branch_coverage=1 00:05:07.585 --rc genhtml_function_coverage=1 00:05:07.585 --rc genhtml_legend=1 00:05:07.585 --rc geninfo_all_blocks=1 00:05:07.585 --rc geninfo_unexecuted_blocks=1 00:05:07.585 00:05:07.585 ' 00:05:07.585 05:55:27 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:07.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.585 --rc genhtml_branch_coverage=1 00:05:07.585 --rc genhtml_function_coverage=1 00:05:07.585 --rc genhtml_legend=1 00:05:07.585 --rc geninfo_all_blocks=1 00:05:07.585 --rc geninfo_unexecuted_blocks=1 00:05:07.585 00:05:07.585 ' 00:05:07.585 05:55:27 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:07.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.585 --rc genhtml_branch_coverage=1 00:05:07.585 --rc genhtml_function_coverage=1 00:05:07.585 --rc genhtml_legend=1 00:05:07.585 --rc geninfo_all_blocks=1 00:05:07.585 --rc geninfo_unexecuted_blocks=1 00:05:07.585 00:05:07.585 ' 00:05:07.585 05:55:27 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:07.585 05:55:27 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:07.585 05:55:27 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:07.585 05:55:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.585 05:55:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.585 05:55:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.585 ************************************ 00:05:07.585 START TEST skip_rpc 00:05:07.585 ************************************ 00:05:07.585 05:55:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:07.585 05:55:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=773512 00:05:07.585 05:55:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:07.585 05:55:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:07.585 05:55:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
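The `lt 1.15 2` / `cmp_versions` trace above splits each dotted version on `.` into an array and compares field by field, padding the shorter version with zeros. A minimal sketch of that comparison, under the assumption that the helper name `version_lt` is illustrative (the log's helpers live in `scripts/common.sh`):

```shell
# Sketch of the dotted-version comparison traced above (lt/cmp_versions):
# split both versions on '.', compare numerically field by field, and
# treat missing fields as 0. Returns 0 (true) iff $1 < $2.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}
        ((x < y)) && return 0
        ((x > y)) && return 1
    done
    return 1   # versions are equal, so not less-than
}
```

With this, `version_lt 1.15 2` succeeds, which is the check the skip_rpc preamble uses to pick lcov coverage flags.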
00:05:07.845 [2024-12-15 05:55:27.748725] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:07.845 [2024-12-15 05:55:27.748762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773512 ] 00:05:07.845 [2024-12-15 05:55:27.822432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.845 [2024-12-15 05:55:27.844585] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.111 05:55:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:13.111 05:55:32 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:13.111 05:55:32 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:13.111 05:55:32 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:13.111 05:55:32 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.111 05:55:32 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:13.111 05:55:32 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.111 05:55:32 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:13.111 05:55:32 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.111 05:55:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.111 05:55:32 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:13.111 05:55:32 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:13.111 05:55:32 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:13.111 05:55:32 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:13.111 05:55:32 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:13.111 05:55:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:13.111 05:55:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 773512 00:05:13.111 05:55:32 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 773512 ']' 00:05:13.111 05:55:32 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 773512 00:05:13.111 05:55:32 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:13.111 05:55:32 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.111 05:55:32 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 773512 00:05:13.111 05:55:32 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:13.111 05:55:32 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:13.111 05:55:32 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 773512' 00:05:13.111 killing process with pid 773512 00:05:13.111 05:55:32 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 773512 00:05:13.112 05:55:32 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 773512 00:05:13.112 00:05:13.112 real 0m5.358s 00:05:13.112 user 0m5.114s 00:05:13.112 sys 0m0.275s 00:05:13.112 05:55:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.112 05:55:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.112 ************************************ 00:05:13.112 END TEST skip_rpc 00:05:13.112 ************************************ 00:05:13.112 05:55:33 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:13.112 05:55:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.112 05:55:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.112 05:55:33 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:05:13.112 ************************************ 00:05:13.112 START TEST skip_rpc_with_json 00:05:13.112 ************************************ 00:05:13.112 05:55:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:13.112 05:55:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:13.112 05:55:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=774434 00:05:13.112 05:55:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.112 05:55:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:13.112 05:55:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 774434 00:05:13.112 05:55:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 774434 ']' 00:05:13.112 05:55:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.112 05:55:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.112 05:55:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.112 05:55:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.112 05:55:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:13.112 [2024-12-15 05:55:33.175348] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
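The skip_rpc test above wraps `rpc_cmd spdk_get_version` in `NOT`: while the target runs with `--no-rpc-server`, the RPC call must fail, and the test passes only because it does (`es=1`, `[[ 1 == 0 ]]` is false). A hedged sketch of that inverted-expectation helper, with an illustrative body rather than the exact autotest_common.sh source:

```shell
# Sketch of the NOT helper pattern used in the skip_rpc trace above:
# run a command that is *expected* to fail, capture its exit status
# without tripping errexit, and succeed only if the command failed.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # invert: failure of the wrapped command is success
}
```

This is why the trace shows `es=1` followed by `(( !es == 0 ))` before the test is allowed to proceed to teardown.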
00:05:13.112 [2024-12-15 05:55:33.175389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid774434 ] 00:05:13.370 [2024-12-15 05:55:33.251516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.370 [2024-12-15 05:55:33.274388] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.370 05:55:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.370 05:55:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:13.370 05:55:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:13.370 05:55:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.370 05:55:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:13.370 [2024-12-15 05:55:33.471466] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:13.370 request: 00:05:13.370 { 00:05:13.370 "trtype": "tcp", 00:05:13.370 "method": "nvmf_get_transports", 00:05:13.370 "req_id": 1 00:05:13.370 } 00:05:13.370 Got JSON-RPC error response 00:05:13.370 response: 00:05:13.370 { 00:05:13.370 "code": -19, 00:05:13.370 "message": "No such device" 00:05:13.370 } 00:05:13.370 05:55:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:13.370 05:55:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:13.370 05:55:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.370 05:55:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:13.370 [2024-12-15 05:55:33.483564] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:13.370 05:55:33 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.370 05:55:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:13.370 05:55:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.370 05:55:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:13.629 05:55:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.629 05:55:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:13.629 { 00:05:13.629 "subsystems": [ 00:05:13.629 { 00:05:13.629 "subsystem": "fsdev", 00:05:13.629 "config": [ 00:05:13.629 { 00:05:13.629 "method": "fsdev_set_opts", 00:05:13.629 "params": { 00:05:13.629 "fsdev_io_pool_size": 65535, 00:05:13.629 "fsdev_io_cache_size": 256 00:05:13.629 } 00:05:13.629 } 00:05:13.629 ] 00:05:13.629 }, 00:05:13.629 { 00:05:13.629 "subsystem": "vfio_user_target", 00:05:13.629 "config": null 00:05:13.629 }, 00:05:13.629 { 00:05:13.629 "subsystem": "keyring", 00:05:13.629 "config": [] 00:05:13.629 }, 00:05:13.629 { 00:05:13.629 "subsystem": "iobuf", 00:05:13.629 "config": [ 00:05:13.629 { 00:05:13.629 "method": "iobuf_set_options", 00:05:13.629 "params": { 00:05:13.629 "small_pool_count": 8192, 00:05:13.629 "large_pool_count": 1024, 00:05:13.629 "small_bufsize": 8192, 00:05:13.629 "large_bufsize": 135168, 00:05:13.629 "enable_numa": false 00:05:13.629 } 00:05:13.629 } 00:05:13.629 ] 00:05:13.629 }, 00:05:13.629 { 00:05:13.629 "subsystem": "sock", 00:05:13.629 "config": [ 00:05:13.629 { 00:05:13.629 "method": "sock_set_default_impl", 00:05:13.629 "params": { 00:05:13.629 "impl_name": "posix" 00:05:13.629 } 00:05:13.629 }, 00:05:13.630 { 00:05:13.630 "method": "sock_impl_set_options", 00:05:13.630 "params": { 00:05:13.630 "impl_name": "ssl", 00:05:13.630 "recv_buf_size": 4096, 00:05:13.630 "send_buf_size": 4096, 
00:05:13.630 "enable_recv_pipe": true, 00:05:13.630 "enable_quickack": false, 00:05:13.630 "enable_placement_id": 0, 00:05:13.630 "enable_zerocopy_send_server": true, 00:05:13.630 "enable_zerocopy_send_client": false, 00:05:13.630 "zerocopy_threshold": 0, 00:05:13.630 "tls_version": 0, 00:05:13.630 "enable_ktls": false 00:05:13.630 } 00:05:13.630 }, 00:05:13.630 { 00:05:13.630 "method": "sock_impl_set_options", 00:05:13.630 "params": { 00:05:13.630 "impl_name": "posix", 00:05:13.630 "recv_buf_size": 2097152, 00:05:13.630 "send_buf_size": 2097152, 00:05:13.630 "enable_recv_pipe": true, 00:05:13.630 "enable_quickack": false, 00:05:13.630 "enable_placement_id": 0, 00:05:13.630 "enable_zerocopy_send_server": true, 00:05:13.630 "enable_zerocopy_send_client": false, 00:05:13.630 "zerocopy_threshold": 0, 00:05:13.630 "tls_version": 0, 00:05:13.630 "enable_ktls": false 00:05:13.630 } 00:05:13.630 } 00:05:13.630 ] 00:05:13.630 }, 00:05:13.630 { 00:05:13.630 "subsystem": "vmd", 00:05:13.630 "config": [] 00:05:13.630 }, 00:05:13.630 { 00:05:13.630 "subsystem": "accel", 00:05:13.630 "config": [ 00:05:13.630 { 00:05:13.630 "method": "accel_set_options", 00:05:13.630 "params": { 00:05:13.630 "small_cache_size": 128, 00:05:13.630 "large_cache_size": 16, 00:05:13.630 "task_count": 2048, 00:05:13.630 "sequence_count": 2048, 00:05:13.630 "buf_count": 2048 00:05:13.630 } 00:05:13.630 } 00:05:13.630 ] 00:05:13.630 }, 00:05:13.630 { 00:05:13.630 "subsystem": "bdev", 00:05:13.630 "config": [ 00:05:13.630 { 00:05:13.630 "method": "bdev_set_options", 00:05:13.630 "params": { 00:05:13.630 "bdev_io_pool_size": 65535, 00:05:13.630 "bdev_io_cache_size": 256, 00:05:13.630 "bdev_auto_examine": true, 00:05:13.630 "iobuf_small_cache_size": 128, 00:05:13.630 "iobuf_large_cache_size": 16 00:05:13.630 } 00:05:13.630 }, 00:05:13.630 { 00:05:13.630 "method": "bdev_raid_set_options", 00:05:13.630 "params": { 00:05:13.630 "process_window_size_kb": 1024, 00:05:13.630 "process_max_bandwidth_mb_sec": 0 
00:05:13.630 } 00:05:13.630 }, 00:05:13.630 { 00:05:13.630 "method": "bdev_iscsi_set_options", 00:05:13.630 "params": { 00:05:13.630 "timeout_sec": 30 00:05:13.630 } 00:05:13.630 }, 00:05:13.630 { 00:05:13.630 "method": "bdev_nvme_set_options", 00:05:13.630 "params": { 00:05:13.630 "action_on_timeout": "none", 00:05:13.630 "timeout_us": 0, 00:05:13.630 "timeout_admin_us": 0, 00:05:13.630 "keep_alive_timeout_ms": 10000, 00:05:13.630 "arbitration_burst": 0, 00:05:13.630 "low_priority_weight": 0, 00:05:13.630 "medium_priority_weight": 0, 00:05:13.630 "high_priority_weight": 0, 00:05:13.630 "nvme_adminq_poll_period_us": 10000, 00:05:13.630 "nvme_ioq_poll_period_us": 0, 00:05:13.630 "io_queue_requests": 0, 00:05:13.630 "delay_cmd_submit": true, 00:05:13.630 "transport_retry_count": 4, 00:05:13.630 "bdev_retry_count": 3, 00:05:13.630 "transport_ack_timeout": 0, 00:05:13.630 "ctrlr_loss_timeout_sec": 0, 00:05:13.630 "reconnect_delay_sec": 0, 00:05:13.630 "fast_io_fail_timeout_sec": 0, 00:05:13.630 "disable_auto_failback": false, 00:05:13.630 "generate_uuids": false, 00:05:13.630 "transport_tos": 0, 00:05:13.630 "nvme_error_stat": false, 00:05:13.630 "rdma_srq_size": 0, 00:05:13.630 "io_path_stat": false, 00:05:13.630 "allow_accel_sequence": false, 00:05:13.630 "rdma_max_cq_size": 0, 00:05:13.630 "rdma_cm_event_timeout_ms": 0, 00:05:13.630 "dhchap_digests": [ 00:05:13.630 "sha256", 00:05:13.630 "sha384", 00:05:13.630 "sha512" 00:05:13.630 ], 00:05:13.630 "dhchap_dhgroups": [ 00:05:13.630 "null", 00:05:13.630 "ffdhe2048", 00:05:13.630 "ffdhe3072", 00:05:13.630 "ffdhe4096", 00:05:13.630 "ffdhe6144", 00:05:13.630 "ffdhe8192" 00:05:13.630 ], 00:05:13.630 "rdma_umr_per_io": false 00:05:13.630 } 00:05:13.630 }, 00:05:13.630 { 00:05:13.630 "method": "bdev_nvme_set_hotplug", 00:05:13.630 "params": { 00:05:13.630 "period_us": 100000, 00:05:13.630 "enable": false 00:05:13.630 } 00:05:13.630 }, 00:05:13.630 { 00:05:13.630 "method": "bdev_wait_for_examine" 00:05:13.630 } 00:05:13.630 
] 00:05:13.630 }, 00:05:13.630 { 00:05:13.630 "subsystem": "scsi", 00:05:13.630 "config": null 00:05:13.630 }, 00:05:13.630 { 00:05:13.630 "subsystem": "scheduler", 00:05:13.630 "config": [ 00:05:13.630 { 00:05:13.630 "method": "framework_set_scheduler", 00:05:13.630 "params": { 00:05:13.630 "name": "static" 00:05:13.630 } 00:05:13.630 } 00:05:13.630 ] 00:05:13.630 }, 00:05:13.630 { 00:05:13.630 "subsystem": "vhost_scsi", 00:05:13.630 "config": [] 00:05:13.630 }, 00:05:13.630 { 00:05:13.630 "subsystem": "vhost_blk", 00:05:13.630 "config": [] 00:05:13.630 }, 00:05:13.630 { 00:05:13.630 "subsystem": "ublk", 00:05:13.630 "config": [] 00:05:13.630 }, 00:05:13.630 { 00:05:13.630 "subsystem": "nbd", 00:05:13.630 "config": [] 00:05:13.630 }, 00:05:13.630 { 00:05:13.630 "subsystem": "nvmf", 00:05:13.630 "config": [ 00:05:13.630 { 00:05:13.630 "method": "nvmf_set_config", 00:05:13.630 "params": { 00:05:13.630 "discovery_filter": "match_any", 00:05:13.630 "admin_cmd_passthru": { 00:05:13.630 "identify_ctrlr": false 00:05:13.630 }, 00:05:13.630 "dhchap_digests": [ 00:05:13.630 "sha256", 00:05:13.630 "sha384", 00:05:13.630 "sha512" 00:05:13.630 ], 00:05:13.630 "dhchap_dhgroups": [ 00:05:13.630 "null", 00:05:13.630 "ffdhe2048", 00:05:13.630 "ffdhe3072", 00:05:13.630 "ffdhe4096", 00:05:13.630 "ffdhe6144", 00:05:13.630 "ffdhe8192" 00:05:13.630 ] 00:05:13.630 } 00:05:13.630 }, 00:05:13.630 { 00:05:13.630 "method": "nvmf_set_max_subsystems", 00:05:13.630 "params": { 00:05:13.630 "max_subsystems": 1024 00:05:13.630 } 00:05:13.630 }, 00:05:13.630 { 00:05:13.630 "method": "nvmf_set_crdt", 00:05:13.630 "params": { 00:05:13.630 "crdt1": 0, 00:05:13.630 "crdt2": 0, 00:05:13.630 "crdt3": 0 00:05:13.630 } 00:05:13.630 }, 00:05:13.630 { 00:05:13.630 "method": "nvmf_create_transport", 00:05:13.630 "params": { 00:05:13.630 "trtype": "TCP", 00:05:13.630 "max_queue_depth": 128, 00:05:13.630 "max_io_qpairs_per_ctrlr": 127, 00:05:13.630 "in_capsule_data_size": 4096, 00:05:13.630 "max_io_size": 
131072, 00:05:13.630 "io_unit_size": 131072, 00:05:13.630 "max_aq_depth": 128, 00:05:13.630 "num_shared_buffers": 511, 00:05:13.630 "buf_cache_size": 4294967295, 00:05:13.630 "dif_insert_or_strip": false, 00:05:13.630 "zcopy": false, 00:05:13.630 "c2h_success": true, 00:05:13.630 "sock_priority": 0, 00:05:13.630 "abort_timeout_sec": 1, 00:05:13.630 "ack_timeout": 0, 00:05:13.630 "data_wr_pool_size": 0 00:05:13.630 } 00:05:13.630 } 00:05:13.630 ] 00:05:13.630 }, 00:05:13.630 { 00:05:13.630 "subsystem": "iscsi", 00:05:13.630 "config": [ 00:05:13.630 { 00:05:13.630 "method": "iscsi_set_options", 00:05:13.630 "params": { 00:05:13.630 "node_base": "iqn.2016-06.io.spdk", 00:05:13.630 "max_sessions": 128, 00:05:13.630 "max_connections_per_session": 2, 00:05:13.630 "max_queue_depth": 64, 00:05:13.630 "default_time2wait": 2, 00:05:13.630 "default_time2retain": 20, 00:05:13.630 "first_burst_length": 8192, 00:05:13.630 "immediate_data": true, 00:05:13.630 "allow_duplicated_isid": false, 00:05:13.630 "error_recovery_level": 0, 00:05:13.630 "nop_timeout": 60, 00:05:13.630 "nop_in_interval": 30, 00:05:13.630 "disable_chap": false, 00:05:13.630 "require_chap": false, 00:05:13.630 "mutual_chap": false, 00:05:13.630 "chap_group": 0, 00:05:13.630 "max_large_datain_per_connection": 64, 00:05:13.630 "max_r2t_per_connection": 4, 00:05:13.630 "pdu_pool_size": 36864, 00:05:13.630 "immediate_data_pool_size": 16384, 00:05:13.630 "data_out_pool_size": 2048 00:05:13.630 } 00:05:13.630 } 00:05:13.630 ] 00:05:13.630 } 00:05:13.630 ] 00:05:13.630 } 00:05:13.630 05:55:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:13.630 05:55:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 774434 00:05:13.630 05:55:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 774434 ']' 00:05:13.630 05:55:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 774434 00:05:13.630 05:55:33 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:13.630 05:55:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.630 05:55:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 774434 00:05:13.630 05:55:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:13.630 05:55:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:13.630 05:55:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 774434' 00:05:13.630 killing process with pid 774434 00:05:13.630 05:55:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 774434 00:05:13.630 05:55:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 774434 00:05:13.889 05:55:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=774473 00:05:13.889 05:55:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:13.889 05:55:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:19.154 05:55:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 774473 00:05:19.154 05:55:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 774473 ']' 00:05:19.154 05:55:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 774473 00:05:19.154 05:55:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:19.154 05:55:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.154 05:55:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 774473 00:05:19.154 05:55:39 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:19.154 05:55:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:19.154 05:55:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 774473' 00:05:19.154 killing process with pid 774473 00:05:19.154 05:55:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 774473 00:05:19.154 05:55:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 774473 00:05:19.412 05:55:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:19.412 05:55:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:19.412 00:05:19.412 real 0m6.233s 00:05:19.412 user 0m5.930s 00:05:19.412 sys 0m0.600s 00:05:19.412 05:55:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.412 05:55:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:19.412 ************************************ 00:05:19.412 END TEST skip_rpc_with_json 00:05:19.412 ************************************ 00:05:19.413 05:55:39 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:19.413 05:55:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.413 05:55:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.413 05:55:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.413 ************************************ 00:05:19.413 START TEST skip_rpc_with_delay 00:05:19.413 ************************************ 00:05:19.413 05:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:19.413 05:55:39 skip_rpc.skip_rpc_with_delay -- 
rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:19.413 05:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:19.413 05:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:19.413 05:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.413 05:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.413 05:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.413 05:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.413 05:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.413 05:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.413 05:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.413 05:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:19.413 05:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:19.413 [2024-12-15 05:55:39.483732] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:19.413 05:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:19.413 05:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:19.413 05:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:19.413 05:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:19.413 00:05:19.413 real 0m0.070s 00:05:19.413 user 0m0.038s 00:05:19.413 sys 0m0.031s 00:05:19.413 05:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.413 05:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:19.413 ************************************ 00:05:19.413 END TEST skip_rpc_with_delay 00:05:19.413 ************************************ 00:05:19.413 05:55:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:19.413 05:55:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:19.413 05:55:39 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:19.413 05:55:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.413 05:55:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.413 05:55:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.672 ************************************ 00:05:19.672 START TEST exit_on_failed_rpc_init 00:05:19.672 ************************************ 00:05:19.672 05:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:19.672 05:55:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=775506 00:05:19.672 05:55:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 775506 00:05:19.672 05:55:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:05:19.672 05:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 775506 ']' 00:05:19.672 05:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.672 05:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.672 05:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.672 05:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.672 05:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:19.672 [2024-12-15 05:55:39.623810] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:19.672 [2024-12-15 05:55:39.623853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775506 ] 00:05:19.672 [2024-12-15 05:55:39.697545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.672 [2024-12-15 05:55:39.720593] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.931 05:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.931 05:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:19.931 05:55:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.931 05:55:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:19.931 
05:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:19.931 05:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:19.931 05:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.931 05:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.931 05:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.931 05:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.931 05:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.931 05:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.931 05:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.931 05:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:19.931 05:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:19.931 [2024-12-15 05:55:39.982765] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:19.931 [2024-12-15 05:55:39.982808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775624 ] 00:05:19.931 [2024-12-15 05:55:40.060590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.189 [2024-12-15 05:55:40.083607] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.190 [2024-12-15 05:55:40.083665] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:20.190 [2024-12-15 05:55:40.083675] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:20.190 [2024-12-15 05:55:40.083681] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:20.190 05:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:20.190 05:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:20.190 05:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:20.190 05:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:20.190 05:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:20.190 05:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:20.190 05:55:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:20.190 05:55:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 775506 00:05:20.190 05:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 775506 ']' 00:05:20.190 05:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 775506 00:05:20.190 05:55:40 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:20.190 05:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.190 05:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 775506 00:05:20.190 05:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.190 05:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.190 05:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 775506' 00:05:20.190 killing process with pid 775506 00:05:20.190 05:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 775506 00:05:20.190 05:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 775506 00:05:20.449 00:05:20.449 real 0m0.902s 00:05:20.449 user 0m0.927s 00:05:20.449 sys 0m0.401s 00:05:20.449 05:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.449 05:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:20.449 ************************************ 00:05:20.449 END TEST exit_on_failed_rpc_init 00:05:20.449 ************************************ 00:05:20.449 05:55:40 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:20.449 00:05:20.449 real 0m13.023s 00:05:20.449 user 0m12.229s 00:05:20.449 sys 0m1.577s 00:05:20.449 05:55:40 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.449 05:55:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.449 ************************************ 00:05:20.449 END TEST skip_rpc 00:05:20.449 ************************************ 00:05:20.449 05:55:40 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:20.449 05:55:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.449 05:55:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.449 05:55:40 -- common/autotest_common.sh@10 -- # set +x 00:05:20.449 ************************************ 00:05:20.449 START TEST rpc_client 00:05:20.449 ************************************ 00:05:20.449 05:55:40 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:20.707 * Looking for test storage... 00:05:20.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:20.708 05:55:40 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:20.708 05:55:40 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:20.708 05:55:40 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:20.708 05:55:40 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:20.708 05:55:40 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.708 05:55:40 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.708 05:55:40 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.708 05:55:40 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.708 05:55:40 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.708 05:55:40 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.708 05:55:40 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.708 05:55:40 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.708 05:55:40 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.708 05:55:40 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.708 05:55:40 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.708 05:55:40 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:05:20.708 05:55:40 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:20.708 05:55:40 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.708 05:55:40 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.708 05:55:40 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:20.708 05:55:40 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:20.708 05:55:40 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.708 05:55:40 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:20.708 05:55:40 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.708 05:55:40 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:20.708 05:55:40 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:20.708 05:55:40 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.708 05:55:40 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:20.708 05:55:40 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.708 05:55:40 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.708 05:55:40 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.708 05:55:40 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:20.708 05:55:40 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.708 05:55:40 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:20.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.708 --rc genhtml_branch_coverage=1 00:05:20.708 --rc genhtml_function_coverage=1 00:05:20.708 --rc genhtml_legend=1 00:05:20.708 --rc geninfo_all_blocks=1 00:05:20.708 --rc geninfo_unexecuted_blocks=1 00:05:20.708 00:05:20.708 ' 00:05:20.708 05:55:40 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:20.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.708 --rc genhtml_branch_coverage=1 
00:05:20.708 --rc genhtml_function_coverage=1 00:05:20.708 --rc genhtml_legend=1 00:05:20.708 --rc geninfo_all_blocks=1 00:05:20.708 --rc geninfo_unexecuted_blocks=1 00:05:20.708 00:05:20.708 ' 00:05:20.708 05:55:40 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:20.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.708 --rc genhtml_branch_coverage=1 00:05:20.708 --rc genhtml_function_coverage=1 00:05:20.708 --rc genhtml_legend=1 00:05:20.708 --rc geninfo_all_blocks=1 00:05:20.708 --rc geninfo_unexecuted_blocks=1 00:05:20.708 00:05:20.708 ' 00:05:20.708 05:55:40 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:20.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.708 --rc genhtml_branch_coverage=1 00:05:20.708 --rc genhtml_function_coverage=1 00:05:20.708 --rc genhtml_legend=1 00:05:20.708 --rc geninfo_all_blocks=1 00:05:20.708 --rc geninfo_unexecuted_blocks=1 00:05:20.708 00:05:20.708 ' 00:05:20.708 05:55:40 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:20.708 OK 00:05:20.708 05:55:40 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:20.708 00:05:20.708 real 0m0.202s 00:05:20.708 user 0m0.112s 00:05:20.708 sys 0m0.103s 00:05:20.708 05:55:40 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.708 05:55:40 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:20.708 ************************************ 00:05:20.708 END TEST rpc_client 00:05:20.708 ************************************ 00:05:20.708 05:55:40 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:20.708 05:55:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.708 05:55:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.708 05:55:40 -- common/autotest_common.sh@10 
-- # set +x 00:05:20.967 ************************************ 00:05:20.967 START TEST json_config 00:05:20.967 ************************************ 00:05:20.967 05:55:40 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:20.967 05:55:40 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:20.967 05:55:40 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:20.967 05:55:40 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:20.967 05:55:40 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:20.967 05:55:40 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.967 05:55:40 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.967 05:55:40 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.967 05:55:40 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.967 05:55:40 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.967 05:55:40 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.967 05:55:40 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.967 05:55:40 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.967 05:55:40 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.967 05:55:40 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.968 05:55:40 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.968 05:55:40 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:20.968 05:55:40 json_config -- scripts/common.sh@345 -- # : 1 00:05:20.968 05:55:40 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.968 05:55:40 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:20.968 05:55:40 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:20.968 05:55:40 json_config -- scripts/common.sh@353 -- # local d=1 00:05:20.968 05:55:40 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.968 05:55:40 json_config -- scripts/common.sh@355 -- # echo 1 00:05:20.968 05:55:40 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.968 05:55:40 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:20.968 05:55:40 json_config -- scripts/common.sh@353 -- # local d=2 00:05:20.968 05:55:40 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.968 05:55:40 json_config -- scripts/common.sh@355 -- # echo 2 00:05:20.968 05:55:40 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.968 05:55:40 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.968 05:55:40 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.968 05:55:40 json_config -- scripts/common.sh@368 -- # return 0 00:05:20.968 05:55:40 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.968 05:55:40 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:20.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.968 --rc genhtml_branch_coverage=1 00:05:20.968 --rc genhtml_function_coverage=1 00:05:20.968 --rc genhtml_legend=1 00:05:20.968 --rc geninfo_all_blocks=1 00:05:20.968 --rc geninfo_unexecuted_blocks=1 00:05:20.968 00:05:20.968 ' 00:05:20.968 05:55:40 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:20.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.968 --rc genhtml_branch_coverage=1 00:05:20.968 --rc genhtml_function_coverage=1 00:05:20.968 --rc genhtml_legend=1 00:05:20.968 --rc geninfo_all_blocks=1 00:05:20.968 --rc geninfo_unexecuted_blocks=1 00:05:20.968 00:05:20.968 ' 00:05:20.968 05:55:40 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:20.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.968 --rc genhtml_branch_coverage=1 00:05:20.968 --rc genhtml_function_coverage=1 00:05:20.968 --rc genhtml_legend=1 00:05:20.968 --rc geninfo_all_blocks=1 00:05:20.968 --rc geninfo_unexecuted_blocks=1 00:05:20.968 00:05:20.968 ' 00:05:20.968 05:55:40 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:20.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.968 --rc genhtml_branch_coverage=1 00:05:20.968 --rc genhtml_function_coverage=1 00:05:20.968 --rc genhtml_legend=1 00:05:20.968 --rc geninfo_all_blocks=1 00:05:20.968 --rc geninfo_unexecuted_blocks=1 00:05:20.968 00:05:20.968 ' 00:05:20.968 05:55:40 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:20.968 05:55:41 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:20.968 05:55:41 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:20.968 05:55:41 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:20.968 05:55:41 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:20.968 05:55:41 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:20.968 05:55:41 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:20.968 05:55:41 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:20.968 05:55:41 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:20.968 05:55:41 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:20.968 05:55:41 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:20.968 05:55:41 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:20.968 05:55:41 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:20.968 05:55:41 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:20.968 05:55:41 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:20.968 05:55:41 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:20.968 05:55:41 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:20.968 05:55:41 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:20.968 05:55:41 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:20.968 05:55:41 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:20.968 05:55:41 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:20.968 05:55:41 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:20.968 05:55:41 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:20.968 05:55:41 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.968 05:55:41 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.968 05:55:41 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.968 05:55:41 json_config -- paths/export.sh@5 -- # export PATH 00:05:20.968 05:55:41 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.968 05:55:41 json_config -- nvmf/common.sh@51 -- # : 0 00:05:20.968 05:55:41 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:20.968 05:55:41 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:20.968 05:55:41 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:20.968 05:55:41 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:20.968 05:55:41 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:20.968 05:55:41 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:20.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:20.968 05:55:41 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:20.968 05:55:41 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:20.968 05:55:41 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:20.968 05:55:41 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:20.968 05:55:41 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:20.968 05:55:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:20.968 05:55:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:20.968 05:55:41 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:20.968 05:55:41 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:20.968 05:55:41 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:20.968 05:55:41 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:20.968 05:55:41 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:20.968 05:55:41 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:20.968 05:55:41 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:20.968 05:55:41 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:20.968 05:55:41 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:20.968 05:55:41 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:20.968 05:55:41 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:20.968 05:55:41 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:20.968 INFO: JSON configuration test init 00:05:20.968 05:55:41 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:20.968 05:55:41 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:20.968 05:55:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:20.968 05:55:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.968 05:55:41 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:20.968 05:55:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:20.968 05:55:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.968 05:55:41 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:20.968 05:55:41 json_config -- json_config/common.sh@9 -- # local app=target 00:05:20.968 05:55:41 json_config -- json_config/common.sh@10 -- # shift 00:05:20.968 05:55:41 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:20.968 05:55:41 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:20.968 05:55:41 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:20.968 05:55:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:20.968 05:55:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:20.968 05:55:41 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=775948 00:05:20.968 05:55:41 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:20.968 Waiting for target to run... 
00:05:20.968 05:55:41 json_config -- json_config/common.sh@25 -- # waitforlisten 775948 /var/tmp/spdk_tgt.sock 00:05:20.968 05:55:41 json_config -- common/autotest_common.sh@835 -- # '[' -z 775948 ']' 00:05:20.968 05:55:41 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:20.968 05:55:41 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:20.969 05:55:41 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.969 05:55:41 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:20.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:20.969 05:55:41 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.969 05:55:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.969 [2024-12-15 05:55:41.098647] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:20.969 [2024-12-15 05:55:41.098695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775948 ] 00:05:21.536 [2024-12-15 05:55:41.553309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.536 [2024-12-15 05:55:41.575610] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.795 05:55:41 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.795 05:55:41 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:21.795 05:55:41 json_config -- json_config/common.sh@26 -- # echo '' 00:05:21.795 00:05:21.795 05:55:41 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:21.795 05:55:41 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:21.795 05:55:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:21.795 05:55:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.795 05:55:41 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:21.795 05:55:41 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:21.795 05:55:41 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:21.795 05:55:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.053 05:55:41 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:22.053 05:55:41 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:22.053 05:55:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:25.343 05:55:45 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:05:25.343 05:55:45 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:25.343 05:55:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:25.343 05:55:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.343 05:55:45 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:25.343 05:55:45 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:25.343 05:55:45 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:25.343 05:55:45 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:25.343 05:55:45 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:25.343 05:55:45 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:25.343 05:55:45 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:25.343 05:55:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:25.343 05:55:45 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:25.343 05:55:45 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:25.343 05:55:45 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:25.343 05:55:45 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:25.343 05:55:45 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:25.343 05:55:45 json_config -- json_config/json_config.sh@54 -- # sort 00:05:25.343 05:55:45 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:25.343 05:55:45 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:05:25.343 05:55:45 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:25.343 05:55:45 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:25.343 05:55:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:25.343 05:55:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.343 05:55:45 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:25.343 05:55:45 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:25.343 05:55:45 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:25.343 05:55:45 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:25.343 05:55:45 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:25.343 05:55:45 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:25.343 05:55:45 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:25.343 05:55:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:25.343 05:55:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.343 05:55:45 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:25.343 05:55:45 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:25.343 05:55:45 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:25.343 05:55:45 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:25.343 05:55:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:25.343 MallocForNvmf0 00:05:25.602 05:55:45 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:05:25.602 05:55:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:25.602 MallocForNvmf1 00:05:25.602 05:55:45 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:25.602 05:55:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:25.860 [2024-12-15 05:55:45.844933] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:25.860 05:55:45 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:25.860 05:55:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:26.119 05:55:46 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:26.119 05:55:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:26.378 05:55:46 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:26.378 05:55:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:26.378 05:55:46 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:26.378 05:55:46 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:26.636 [2024-12-15 05:55:46.651323] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:26.636 05:55:46 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:26.636 05:55:46 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:26.636 05:55:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.636 05:55:46 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:26.636 05:55:46 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:26.636 05:55:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.636 05:55:46 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:26.636 05:55:46 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:26.636 05:55:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:26.895 MallocBdevForConfigChangeCheck 00:05:26.895 05:55:46 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:26.895 05:55:46 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:26.895 05:55:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.895 05:55:46 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:26.895 05:55:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:27.463 05:55:47 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:05:27.463 INFO: shutting down applications... 00:05:27.463 05:55:47 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:27.463 05:55:47 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:27.463 05:55:47 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:27.463 05:55:47 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:28.839 Calling clear_iscsi_subsystem 00:05:28.839 Calling clear_nvmf_subsystem 00:05:28.839 Calling clear_nbd_subsystem 00:05:28.839 Calling clear_ublk_subsystem 00:05:28.839 Calling clear_vhost_blk_subsystem 00:05:28.839 Calling clear_vhost_scsi_subsystem 00:05:28.839 Calling clear_bdev_subsystem 00:05:28.839 05:55:48 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:28.839 05:55:48 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:28.839 05:55:48 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:28.839 05:55:48 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:28.839 05:55:48 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:28.839 05:55:48 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:29.410 05:55:49 json_config -- json_config/json_config.sh@352 -- # break 00:05:29.410 05:55:49 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:29.410 05:55:49 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:05:29.410 05:55:49 json_config -- json_config/common.sh@31 -- # local app=target 00:05:29.410 05:55:49 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:29.410 05:55:49 json_config -- json_config/common.sh@35 -- # [[ -n 775948 ]] 00:05:29.410 05:55:49 json_config -- json_config/common.sh@38 -- # kill -SIGINT 775948 00:05:29.410 05:55:49 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:29.410 05:55:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:29.410 05:55:49 json_config -- json_config/common.sh@41 -- # kill -0 775948 00:05:29.410 05:55:49 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:29.721 05:55:49 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:29.721 05:55:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:29.721 05:55:49 json_config -- json_config/common.sh@41 -- # kill -0 775948 00:05:29.721 05:55:49 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:29.721 05:55:49 json_config -- json_config/common.sh@43 -- # break 00:05:29.721 05:55:49 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:29.721 05:55:49 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:29.721 SPDK target shutdown done 00:05:29.721 05:55:49 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:29.721 INFO: relaunching applications... 
00:05:29.721 05:55:49 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:29.721 05:55:49 json_config -- json_config/common.sh@9 -- # local app=target 00:05:29.721 05:55:49 json_config -- json_config/common.sh@10 -- # shift 00:05:29.721 05:55:49 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:29.721 05:55:49 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:29.721 05:55:49 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:29.721 05:55:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:29.721 05:55:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:29.721 05:55:49 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=777448 00:05:29.721 05:55:49 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:29.721 Waiting for target to run... 00:05:29.721 05:55:49 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:29.721 05:55:49 json_config -- json_config/common.sh@25 -- # waitforlisten 777448 /var/tmp/spdk_tgt.sock 00:05:29.721 05:55:49 json_config -- common/autotest_common.sh@835 -- # '[' -z 777448 ']' 00:05:29.721 05:55:49 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:29.721 05:55:49 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.721 05:55:49 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:29.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:29.721 05:55:49 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.721 05:55:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.049 [2024-12-15 05:55:49.899111] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:30.049 [2024-12-15 05:55:49.899164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid777448 ] 00:05:30.323 [2024-12-15 05:55:50.197931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.323 [2024-12-15 05:55:50.211424] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.613 [2024-12-15 05:55:53.219354] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:33.613 [2024-12-15 05:55:53.251636] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:33.613 05:55:53 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.613 05:55:53 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:33.613 05:55:53 json_config -- json_config/common.sh@26 -- # echo '' 00:05:33.613 00:05:33.613 05:55:53 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:33.613 05:55:53 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:33.613 INFO: Checking if target configuration is the same... 
00:05:33.613 05:55:53 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:33.613 05:55:53 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:33.613 05:55:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:33.613 + '[' 2 -ne 2 ']' 00:05:33.613 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:33.613 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:33.613 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:33.613 +++ basename /dev/fd/62 00:05:33.613 ++ mktemp /tmp/62.XXX 00:05:33.613 + tmp_file_1=/tmp/62.7EE 00:05:33.613 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:33.613 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:33.613 + tmp_file_2=/tmp/spdk_tgt_config.json.HGa 00:05:33.613 + ret=0 00:05:33.613 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:33.613 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:33.613 + diff -u /tmp/62.7EE /tmp/spdk_tgt_config.json.HGa 00:05:33.613 + echo 'INFO: JSON config files are the same' 00:05:33.613 INFO: JSON config files are the same 00:05:33.613 + rm /tmp/62.7EE /tmp/spdk_tgt_config.json.HGa 00:05:33.613 + exit 0 00:05:33.613 05:55:53 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:33.613 05:55:53 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:33.613 INFO: changing configuration and checking if this can be detected... 
00:05:33.613 05:55:53 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:33.613 05:55:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:33.872 05:55:53 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:33.872 05:55:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:33.872 05:55:53 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:33.872 + '[' 2 -ne 2 ']' 00:05:33.872 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:33.872 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:33.872 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:33.872 +++ basename /dev/fd/62 00:05:33.872 ++ mktemp /tmp/62.XXX 00:05:33.872 + tmp_file_1=/tmp/62.BQx 00:05:33.872 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:33.872 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:33.872 + tmp_file_2=/tmp/spdk_tgt_config.json.yW2 00:05:33.872 + ret=0 00:05:33.872 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:34.131 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:34.389 + diff -u /tmp/62.BQx /tmp/spdk_tgt_config.json.yW2 00:05:34.389 + ret=1 00:05:34.389 + echo '=== Start of file: /tmp/62.BQx ===' 00:05:34.389 + cat /tmp/62.BQx 00:05:34.389 + echo '=== End of file: /tmp/62.BQx ===' 00:05:34.389 + echo '' 00:05:34.389 + echo '=== Start of file: /tmp/spdk_tgt_config.json.yW2 ===' 00:05:34.389 + cat /tmp/spdk_tgt_config.json.yW2 00:05:34.390 + echo '=== End of file: /tmp/spdk_tgt_config.json.yW2 ===' 00:05:34.390 + echo '' 00:05:34.390 + rm /tmp/62.BQx /tmp/spdk_tgt_config.json.yW2 00:05:34.390 + exit 1 00:05:34.390 05:55:54 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:34.390 INFO: configuration change detected. 
00:05:34.390 05:55:54 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:34.390 05:55:54 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:34.390 05:55:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:34.390 05:55:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.390 05:55:54 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:34.390 05:55:54 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:34.390 05:55:54 json_config -- json_config/json_config.sh@324 -- # [[ -n 777448 ]] 00:05:34.390 05:55:54 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:34.390 05:55:54 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:34.390 05:55:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:34.390 05:55:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.390 05:55:54 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:34.390 05:55:54 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:34.390 05:55:54 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:34.390 05:55:54 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:34.390 05:55:54 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:34.390 05:55:54 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:34.390 05:55:54 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:34.390 05:55:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.390 05:55:54 json_config -- json_config/json_config.sh@330 -- # killprocess 777448 00:05:34.390 05:55:54 json_config -- common/autotest_common.sh@954 -- # '[' -z 777448 ']' 00:05:34.390 05:55:54 json_config -- common/autotest_common.sh@958 -- # kill -0 777448 
00:05:34.390 05:55:54 json_config -- common/autotest_common.sh@959 -- # uname 00:05:34.390 05:55:54 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.390 05:55:54 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 777448 00:05:34.390 05:55:54 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.390 05:55:54 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.390 05:55:54 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 777448' 00:05:34.390 killing process with pid 777448 00:05:34.390 05:55:54 json_config -- common/autotest_common.sh@973 -- # kill 777448 00:05:34.390 05:55:54 json_config -- common/autotest_common.sh@978 -- # wait 777448 00:05:35.763 05:55:55 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:35.763 05:55:55 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:35.763 05:55:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:35.763 05:55:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.763 05:55:55 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:35.763 05:55:55 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:35.763 INFO: Success 00:05:35.763 00:05:35.763 real 0m15.016s 00:05:35.763 user 0m16.063s 00:05:35.763 sys 0m1.971s 00:05:35.763 05:55:55 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.763 05:55:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.763 ************************************ 00:05:35.763 END TEST json_config 00:05:35.763 ************************************ 00:05:36.023 05:55:55 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:36.023 05:55:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.023 05:55:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.023 05:55:55 -- common/autotest_common.sh@10 -- # set +x 00:05:36.023 ************************************ 00:05:36.023 START TEST json_config_extra_key 00:05:36.023 ************************************ 00:05:36.023 05:55:55 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:36.023 05:55:56 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:36.023 05:55:56 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:36.023 05:55:56 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:36.023 05:55:56 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:36.023 05:55:56 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.023 05:55:56 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.023 05:55:56 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.023 05:55:56 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.023 05:55:56 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.023 05:55:56 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.023 05:55:56 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.023 05:55:56 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.023 05:55:56 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.023 05:55:56 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.023 05:55:56 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:05:36.023 05:55:56 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:36.023 05:55:56 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:36.023 05:55:56 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.023 05:55:56 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:36.023 05:55:56 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:36.023 05:55:56 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:36.023 05:55:56 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.023 05:55:56 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:36.023 05:55:56 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.023 05:55:56 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:36.023 05:55:56 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:36.023 05:55:56 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.023 05:55:56 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:36.023 05:55:56 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.023 05:55:56 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.023 05:55:56 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.023 05:55:56 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:36.023 05:55:56 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.023 05:55:56 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:36.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.023 --rc genhtml_branch_coverage=1 00:05:36.023 --rc genhtml_function_coverage=1 00:05:36.023 --rc genhtml_legend=1 00:05:36.023 --rc geninfo_all_blocks=1 
00:05:36.023 --rc geninfo_unexecuted_blocks=1 00:05:36.023 00:05:36.023 ' 00:05:36.023 05:55:56 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:36.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.023 --rc genhtml_branch_coverage=1 00:05:36.023 --rc genhtml_function_coverage=1 00:05:36.023 --rc genhtml_legend=1 00:05:36.023 --rc geninfo_all_blocks=1 00:05:36.023 --rc geninfo_unexecuted_blocks=1 00:05:36.023 00:05:36.023 ' 00:05:36.023 05:55:56 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:36.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.023 --rc genhtml_branch_coverage=1 00:05:36.023 --rc genhtml_function_coverage=1 00:05:36.023 --rc genhtml_legend=1 00:05:36.023 --rc geninfo_all_blocks=1 00:05:36.023 --rc geninfo_unexecuted_blocks=1 00:05:36.023 00:05:36.023 ' 00:05:36.023 05:55:56 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:36.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.023 --rc genhtml_branch_coverage=1 00:05:36.023 --rc genhtml_function_coverage=1 00:05:36.023 --rc genhtml_legend=1 00:05:36.023 --rc geninfo_all_blocks=1 00:05:36.023 --rc geninfo_unexecuted_blocks=1 00:05:36.023 00:05:36.023 ' 00:05:36.023 05:55:56 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:36.023 05:55:56 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:36.023 05:55:56 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:36.023 05:55:56 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:36.023 05:55:56 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:36.023 05:55:56 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:36.023 05:55:56 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
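The xtrace above walks scripts/common.sh's cmp_versions: each version string is split on separators into an array (read -ra with IFS=.-:) and the fields are compared numerically, left to right, padding the shorter version with zeros. A simplified, self-contained sketch of the same idea (version_lt is a hypothetical name; the real helper also handles '-' and ':' separators):

```shell
# Return 0 (true) iff dotted version $1 is strictly less than $2.
version_lt() {
  local IFS=.
  local -a v1=($1) v2=($2)          # split on dots, as the trace does via read -ra
  local i len=${#v1[@]}
  (( ${#v2[@]} > len )) && len=${#v2[@]}
  for (( i = 0; i < len; i++ )); do
    local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields compare as 0
    (( 10#$a < 10#$b )) && return 0     # 10# forces decimal (avoids octal surprises)
    (( 10#$a > 10#$b )) && return 1
  done
  return 1                              # equal is not less-than
}

version_lt 1.15 2   && echo "1.15 < 2"      # prints: 1.15 < 2
version_lt 2.1 2.0  || echo "2.1 >= 2.0"    # prints: 2.1 >= 2.0
```

This field-wise comparison is why 1.15 sorts below 2 even though "1.15" > "2" as plain strings.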
00:05:36.023 05:55:56 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:36.023 05:55:56 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:36.023 05:55:56 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:36.023 05:55:56 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:36.023 05:55:56 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:36.023 05:55:56 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:36.023 05:55:56 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:36.023 05:55:56 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:36.023 05:55:56 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:36.023 05:55:56 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:36.024 05:55:56 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:36.024 05:55:56 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:36.024 05:55:56 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:36.024 05:55:56 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:36.024 05:55:56 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:36.024 05:55:56 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:36.024 05:55:56 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.024 05:55:56 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.024 05:55:56 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.024 05:55:56 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:36.024 05:55:56 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.024 05:55:56 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:36.024 05:55:56 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:36.024 05:55:56 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:36.024 05:55:56 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:36.024 05:55:56 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:36.024 05:55:56 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:36.024 05:55:56 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:36.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:36.024 05:55:56 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:36.024 05:55:56 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:36.024 05:55:56 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:36.024 05:55:56 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:36.024 05:55:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:36.024 05:55:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:36.024 05:55:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:36.024 05:55:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:36.024 05:55:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:36.024 05:55:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:36.024 05:55:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:36.024 05:55:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:36.024 05:55:56 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:36.024 05:55:56 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:36.024 INFO: launching applications... 00:05:36.024 05:55:56 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:36.024 05:55:56 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:36.024 05:55:56 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:36.024 05:55:56 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:36.024 05:55:56 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:36.024 05:55:56 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:36.024 05:55:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.024 05:55:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.024 05:55:56 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=778693 00:05:36.024 05:55:56 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:36.024 Waiting for target to run... 
00:05:36.024 05:55:56 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 778693 /var/tmp/spdk_tgt.sock 00:05:36.024 05:55:56 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:36.024 05:55:56 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 778693 ']' 00:05:36.024 05:55:56 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:36.024 05:55:56 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.024 05:55:56 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:36.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:36.024 05:55:56 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.024 05:55:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:36.283 [2024-12-15 05:55:56.178584] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
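waitforlisten in common/autotest_common.sh polls (local max_retries=100, per the trace) until the freshly launched spdk_tgt is up on its UNIX domain socket. A rough sketch of that polling shape (wait_for_socket is a hypothetical name, and it only checks that the socket file has appeared, not that a server is actually accepting on it):

```shell
# Poll until a UNIX-domain socket file exists, up to $2 tries (default 100).
wait_for_socket() {
  local sock=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [ -S "$sock" ] && return 0      # socket file has appeared
    sleep 0.1
  done
  echo "timed out waiting for $sock" >&2
  return 1
}

# Demo: create a listening UNIX socket in the background, then wait for it.
sock=$(mktemp -u)
python3 -c "import socket, time
s = socket.socket(socket.AF_UNIX)
s.bind('$sock'); s.listen()
time.sleep(2)" &
py=$!
wait_for_socket "$sock" && echo "target socket is up"
kill "$py" 2>/dev/null
rm -f "$sock"
```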
00:05:36.283 [2024-12-15 05:55:56.178632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid778693 ] 00:05:36.542 [2024-12-15 05:55:56.630016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.542 [2024-12-15 05:55:56.652083] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.110 05:55:57 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.110 05:55:57 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:37.110 05:55:57 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:37.110 00:05:37.110 05:55:57 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:37.110 INFO: shutting down applications... 00:05:37.110 05:55:57 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:37.110 05:55:57 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:37.110 05:55:57 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:37.110 05:55:57 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 778693 ]] 00:05:37.110 05:55:57 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 778693 00:05:37.110 05:55:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:37.110 05:55:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.110 05:55:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 778693 00:05:37.110 05:55:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.678 05:55:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.678 05:55:57 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.678 05:55:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 778693 00:05:37.678 05:55:57 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:37.678 05:55:57 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:37.678 05:55:57 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:37.678 05:55:57 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:37.678 SPDK target shutdown done 00:05:37.678 05:55:57 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:37.678 Success 00:05:37.678 00:05:37.678 real 0m1.572s 00:05:37.678 user 0m1.173s 00:05:37.678 sys 0m0.573s 00:05:37.678 05:55:57 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.678 05:55:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:37.678 ************************************ 00:05:37.678 END TEST json_config_extra_key 00:05:37.678 ************************************ 00:05:37.678 05:55:57 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:37.678 05:55:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.678 05:55:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.678 05:55:57 -- common/autotest_common.sh@10 -- # set +x 00:05:37.678 ************************************ 00:05:37.679 START TEST alias_rpc 00:05:37.679 ************************************ 00:05:37.679 05:55:57 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:37.679 * Looking for test storage... 
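The shutdown sequence traced above (json_config/common.sh) sends SIGINT to the target, then polls with kill -0 every half second, up to 30 tries, before announcing "SPDK target shutdown done". A hedged sketch of that loop, using SIGTERM on a plain sleep as the stand-in target (background jobs in non-interactive shells ignore SIGINT, so SIGTERM keeps this demo honest; the real script uses SIGINT against spdk_tgt):

```shell
# Signal a process, then poll `kill -0` until it is gone or we give up.
shutdown_app() {
  local pid=$1
  kill -TERM "$pid" 2>/dev/null
  local i
  for (( i = 0; i < 30; i++ )); do     # 30 tries x 0.5 s, as in the trace
    if ! kill -0 "$pid" 2>/dev/null; then
      echo 'SPDK target shutdown done'
      return 0
    fi
    sleep 0.5
  done
  echo "timed out waiting for pid $pid" >&2
  return 1
}

sleep 60 &          # stand-in for the spdk_tgt process
shutdown_app $!     # prints: SPDK target shutdown done
```

`kill -0` sends no signal at all; it merely asks the kernel whether the PID is still deliverable, which is why it works as a liveness probe here.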
00:05:37.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:37.679 05:55:57 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:37.679 05:55:57 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:37.679 05:55:57 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:37.679 05:55:57 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:37.679 05:55:57 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.679 05:55:57 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.679 05:55:57 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.679 05:55:57 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.679 05:55:57 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.679 05:55:57 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.679 05:55:57 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.679 05:55:57 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.679 05:55:57 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.679 05:55:57 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.679 05:55:57 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.679 05:55:57 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:37.679 05:55:57 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:37.679 05:55:57 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.679 05:55:57 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.679 05:55:57 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:37.679 05:55:57 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:37.679 05:55:57 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.679 05:55:57 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:37.679 05:55:57 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.679 05:55:57 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:37.679 05:55:57 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:37.679 05:55:57 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.679 05:55:57 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:37.679 05:55:57 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.679 05:55:57 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.679 05:55:57 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.679 05:55:57 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:37.679 05:55:57 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.679 05:55:57 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:37.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.679 --rc genhtml_branch_coverage=1 00:05:37.679 --rc genhtml_function_coverage=1 00:05:37.679 --rc genhtml_legend=1 00:05:37.679 --rc geninfo_all_blocks=1 00:05:37.679 --rc geninfo_unexecuted_blocks=1 00:05:37.679 00:05:37.679 ' 00:05:37.679 05:55:57 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:37.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.679 --rc genhtml_branch_coverage=1 00:05:37.679 --rc genhtml_function_coverage=1 00:05:37.679 --rc genhtml_legend=1 00:05:37.679 --rc geninfo_all_blocks=1 00:05:37.679 --rc geninfo_unexecuted_blocks=1 00:05:37.679 00:05:37.679 ' 00:05:37.679 05:55:57 alias_rpc -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:05:37.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.679 --rc genhtml_branch_coverage=1 00:05:37.679 --rc genhtml_function_coverage=1 00:05:37.679 --rc genhtml_legend=1 00:05:37.679 --rc geninfo_all_blocks=1 00:05:37.679 --rc geninfo_unexecuted_blocks=1 00:05:37.679 00:05:37.679 ' 00:05:37.679 05:55:57 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:37.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.679 --rc genhtml_branch_coverage=1 00:05:37.679 --rc genhtml_function_coverage=1 00:05:37.679 --rc genhtml_legend=1 00:05:37.679 --rc geninfo_all_blocks=1 00:05:37.679 --rc geninfo_unexecuted_blocks=1 00:05:37.679 00:05:37.679 ' 00:05:37.679 05:55:57 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:37.679 05:55:57 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=778976 00:05:37.679 05:55:57 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 778976 00:05:37.679 05:55:57 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:37.679 05:55:57 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 778976 ']' 00:05:37.679 05:55:57 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.679 05:55:57 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.679 05:55:57 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.679 05:55:57 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.679 05:55:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.679 [2024-12-15 05:55:57.812984] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:37.679 [2024-12-15 05:55:57.813035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid778976 ] 00:05:37.938 [2024-12-15 05:55:57.888176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.938 [2024-12-15 05:55:57.910656] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.197 05:55:58 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.197 05:55:58 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:38.197 05:55:58 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:38.197 05:55:58 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 778976 00:05:38.197 05:55:58 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 778976 ']' 00:05:38.197 05:55:58 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 778976 00:05:38.197 05:55:58 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:38.197 05:55:58 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.197 05:55:58 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 778976 00:05:38.456 05:55:58 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.456 05:55:58 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.456 05:55:58 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 778976' 00:05:38.456 killing process with pid 778976 00:05:38.456 05:55:58 alias_rpc -- common/autotest_common.sh@973 -- # kill 778976 00:05:38.456 05:55:58 alias_rpc -- common/autotest_common.sh@978 -- # wait 778976 00:05:38.715 00:05:38.715 real 0m1.083s 00:05:38.715 user 0m1.112s 00:05:38.715 sys 0m0.411s 00:05:38.715 05:55:58 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.715 05:55:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.715 ************************************ 00:05:38.715 END TEST alias_rpc 00:05:38.715 ************************************ 00:05:38.715 05:55:58 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:38.715 05:55:58 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:38.715 05:55:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.715 05:55:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.715 05:55:58 -- common/autotest_common.sh@10 -- # set +x 00:05:38.715 ************************************ 00:05:38.715 START TEST spdkcli_tcp 00:05:38.715 ************************************ 00:05:38.715 05:55:58 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:38.715 * Looking for test storage... 
00:05:38.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:38.715 05:55:58 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:38.715 05:55:58 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:38.715 05:55:58 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:38.974 05:55:58 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:38.975 05:55:58 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.975 05:55:58 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.975 05:55:58 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.975 05:55:58 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.975 05:55:58 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.975 05:55:58 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.975 05:55:58 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.975 05:55:58 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.975 05:55:58 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.975 05:55:58 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.975 05:55:58 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.975 05:55:58 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:38.975 05:55:58 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:38.975 05:55:58 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.975 05:55:58 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:38.975 05:55:58 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:38.975 05:55:58 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:38.975 05:55:58 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.975 05:55:58 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:38.975 05:55:58 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.975 05:55:58 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:38.975 05:55:58 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:38.975 05:55:58 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.975 05:55:58 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:38.975 05:55:58 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.975 05:55:58 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.975 05:55:58 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.975 05:55:58 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:38.975 05:55:58 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.975 05:55:58 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:38.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.975 --rc genhtml_branch_coverage=1 00:05:38.975 --rc genhtml_function_coverage=1 00:05:38.975 --rc genhtml_legend=1 00:05:38.975 --rc geninfo_all_blocks=1 00:05:38.975 --rc geninfo_unexecuted_blocks=1 00:05:38.975 00:05:38.975 ' 00:05:38.975 05:55:58 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:38.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.975 --rc genhtml_branch_coverage=1 00:05:38.975 --rc genhtml_function_coverage=1 00:05:38.975 --rc genhtml_legend=1 00:05:38.975 --rc geninfo_all_blocks=1 00:05:38.975 --rc geninfo_unexecuted_blocks=1 00:05:38.975 00:05:38.975 ' 00:05:38.975 05:55:58 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:38.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.975 --rc genhtml_branch_coverage=1 00:05:38.975 --rc genhtml_function_coverage=1 00:05:38.975 --rc genhtml_legend=1 00:05:38.975 --rc geninfo_all_blocks=1 00:05:38.975 --rc geninfo_unexecuted_blocks=1 00:05:38.975 00:05:38.975 ' 00:05:38.975 05:55:58 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:38.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.975 --rc genhtml_branch_coverage=1 00:05:38.975 --rc genhtml_function_coverage=1 00:05:38.975 --rc genhtml_legend=1 00:05:38.975 --rc geninfo_all_blocks=1 00:05:38.975 --rc geninfo_unexecuted_blocks=1 00:05:38.975 00:05:38.975 ' 00:05:38.975 05:55:58 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:38.975 05:55:58 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:38.975 05:55:58 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:38.975 05:55:58 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:38.975 05:55:58 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:38.975 05:55:58 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:38.975 05:55:58 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:38.975 05:55:58 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:38.975 05:55:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:38.975 05:55:58 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=779259 00:05:38.975 05:55:58 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 779259 00:05:38.975 05:55:58 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:38.975 05:55:58 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 779259 ']' 00:05:38.975 05:55:58 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.975 05:55:58 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.975 05:55:58 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.975 05:55:58 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.975 05:55:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:38.975 [2024-12-15 05:55:58.973709] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:38.975 [2024-12-15 05:55:58.973755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779259 ] 00:05:38.975 [2024-12-15 05:55:59.049312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.975 [2024-12-15 05:55:59.073068] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.975 [2024-12-15 05:55:59.073070] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.234 05:55:59 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.234 05:55:59 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:39.234 05:55:59 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=779271 00:05:39.234 05:55:59 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:39.234 05:55:59 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:39.494 [ 00:05:39.494 "bdev_malloc_delete", 00:05:39.494 "bdev_malloc_create", 00:05:39.494 "bdev_null_resize", 00:05:39.494 "bdev_null_delete", 00:05:39.494 "bdev_null_create", 00:05:39.494 "bdev_nvme_cuse_unregister", 00:05:39.494 "bdev_nvme_cuse_register", 00:05:39.494 "bdev_opal_new_user", 00:05:39.494 "bdev_opal_set_lock_state", 00:05:39.494 "bdev_opal_delete", 00:05:39.494 "bdev_opal_get_info", 00:05:39.494 "bdev_opal_create", 00:05:39.494 "bdev_nvme_opal_revert", 00:05:39.494 "bdev_nvme_opal_init", 00:05:39.494 "bdev_nvme_send_cmd", 00:05:39.494 "bdev_nvme_set_keys", 00:05:39.494 "bdev_nvme_get_path_iostat", 00:05:39.494 "bdev_nvme_get_mdns_discovery_info", 00:05:39.494 "bdev_nvme_stop_mdns_discovery", 00:05:39.494 "bdev_nvme_start_mdns_discovery", 00:05:39.494 "bdev_nvme_set_multipath_policy", 00:05:39.494 "bdev_nvme_set_preferred_path", 00:05:39.494 "bdev_nvme_get_io_paths", 00:05:39.494 "bdev_nvme_remove_error_injection", 00:05:39.494 "bdev_nvme_add_error_injection", 00:05:39.494 "bdev_nvme_get_discovery_info", 00:05:39.494 "bdev_nvme_stop_discovery", 00:05:39.494 "bdev_nvme_start_discovery", 00:05:39.494 "bdev_nvme_get_controller_health_info", 00:05:39.494 "bdev_nvme_disable_controller", 00:05:39.494 "bdev_nvme_enable_controller", 00:05:39.494 "bdev_nvme_reset_controller", 00:05:39.494 "bdev_nvme_get_transport_statistics", 00:05:39.494 "bdev_nvme_apply_firmware", 00:05:39.494 "bdev_nvme_detach_controller", 00:05:39.494 "bdev_nvme_get_controllers", 00:05:39.494 "bdev_nvme_attach_controller", 00:05:39.494 "bdev_nvme_set_hotplug", 00:05:39.494 "bdev_nvme_set_options", 00:05:39.494 "bdev_passthru_delete", 00:05:39.494 "bdev_passthru_create", 00:05:39.494 "bdev_lvol_set_parent_bdev", 00:05:39.494 "bdev_lvol_set_parent", 00:05:39.494 "bdev_lvol_check_shallow_copy", 00:05:39.494 "bdev_lvol_start_shallow_copy", 00:05:39.494 "bdev_lvol_grow_lvstore", 00:05:39.494 
"bdev_lvol_get_lvols", 00:05:39.494 "bdev_lvol_get_lvstores", 00:05:39.494 "bdev_lvol_delete", 00:05:39.494 "bdev_lvol_set_read_only", 00:05:39.494 "bdev_lvol_resize", 00:05:39.494 "bdev_lvol_decouple_parent", 00:05:39.494 "bdev_lvol_inflate", 00:05:39.494 "bdev_lvol_rename", 00:05:39.494 "bdev_lvol_clone_bdev", 00:05:39.494 "bdev_lvol_clone", 00:05:39.494 "bdev_lvol_snapshot", 00:05:39.494 "bdev_lvol_create", 00:05:39.494 "bdev_lvol_delete_lvstore", 00:05:39.494 "bdev_lvol_rename_lvstore", 00:05:39.494 "bdev_lvol_create_lvstore", 00:05:39.494 "bdev_raid_set_options", 00:05:39.494 "bdev_raid_remove_base_bdev", 00:05:39.494 "bdev_raid_add_base_bdev", 00:05:39.494 "bdev_raid_delete", 00:05:39.494 "bdev_raid_create", 00:05:39.494 "bdev_raid_get_bdevs", 00:05:39.494 "bdev_error_inject_error", 00:05:39.494 "bdev_error_delete", 00:05:39.494 "bdev_error_create", 00:05:39.494 "bdev_split_delete", 00:05:39.494 "bdev_split_create", 00:05:39.494 "bdev_delay_delete", 00:05:39.494 "bdev_delay_create", 00:05:39.494 "bdev_delay_update_latency", 00:05:39.494 "bdev_zone_block_delete", 00:05:39.494 "bdev_zone_block_create", 00:05:39.494 "blobfs_create", 00:05:39.494 "blobfs_detect", 00:05:39.494 "blobfs_set_cache_size", 00:05:39.494 "bdev_aio_delete", 00:05:39.494 "bdev_aio_rescan", 00:05:39.494 "bdev_aio_create", 00:05:39.494 "bdev_ftl_set_property", 00:05:39.494 "bdev_ftl_get_properties", 00:05:39.494 "bdev_ftl_get_stats", 00:05:39.494 "bdev_ftl_unmap", 00:05:39.494 "bdev_ftl_unload", 00:05:39.494 "bdev_ftl_delete", 00:05:39.494 "bdev_ftl_load", 00:05:39.494 "bdev_ftl_create", 00:05:39.494 "bdev_virtio_attach_controller", 00:05:39.494 "bdev_virtio_scsi_get_devices", 00:05:39.494 "bdev_virtio_detach_controller", 00:05:39.494 "bdev_virtio_blk_set_hotplug", 00:05:39.494 "bdev_iscsi_delete", 00:05:39.494 "bdev_iscsi_create", 00:05:39.494 "bdev_iscsi_set_options", 00:05:39.494 "accel_error_inject_error", 00:05:39.494 "ioat_scan_accel_module", 00:05:39.494 "dsa_scan_accel_module", 
00:05:39.494 "iaa_scan_accel_module", 00:05:39.494 "vfu_virtio_create_fs_endpoint", 00:05:39.494 "vfu_virtio_create_scsi_endpoint", 00:05:39.494 "vfu_virtio_scsi_remove_target", 00:05:39.494 "vfu_virtio_scsi_add_target", 00:05:39.494 "vfu_virtio_create_blk_endpoint", 00:05:39.494 "vfu_virtio_delete_endpoint", 00:05:39.494 "keyring_file_remove_key", 00:05:39.494 "keyring_file_add_key", 00:05:39.494 "keyring_linux_set_options", 00:05:39.494 "fsdev_aio_delete", 00:05:39.494 "fsdev_aio_create", 00:05:39.494 "iscsi_get_histogram", 00:05:39.494 "iscsi_enable_histogram", 00:05:39.494 "iscsi_set_options", 00:05:39.494 "iscsi_get_auth_groups", 00:05:39.494 "iscsi_auth_group_remove_secret", 00:05:39.494 "iscsi_auth_group_add_secret", 00:05:39.494 "iscsi_delete_auth_group", 00:05:39.494 "iscsi_create_auth_group", 00:05:39.494 "iscsi_set_discovery_auth", 00:05:39.494 "iscsi_get_options", 00:05:39.494 "iscsi_target_node_request_logout", 00:05:39.494 "iscsi_target_node_set_redirect", 00:05:39.494 "iscsi_target_node_set_auth", 00:05:39.494 "iscsi_target_node_add_lun", 00:05:39.494 "iscsi_get_stats", 00:05:39.494 "iscsi_get_connections", 00:05:39.494 "iscsi_portal_group_set_auth", 00:05:39.494 "iscsi_start_portal_group", 00:05:39.494 "iscsi_delete_portal_group", 00:05:39.494 "iscsi_create_portal_group", 00:05:39.494 "iscsi_get_portal_groups", 00:05:39.494 "iscsi_delete_target_node", 00:05:39.494 "iscsi_target_node_remove_pg_ig_maps", 00:05:39.494 "iscsi_target_node_add_pg_ig_maps", 00:05:39.494 "iscsi_create_target_node", 00:05:39.494 "iscsi_get_target_nodes", 00:05:39.494 "iscsi_delete_initiator_group", 00:05:39.494 "iscsi_initiator_group_remove_initiators", 00:05:39.494 "iscsi_initiator_group_add_initiators", 00:05:39.494 "iscsi_create_initiator_group", 00:05:39.494 "iscsi_get_initiator_groups", 00:05:39.494 "nvmf_set_crdt", 00:05:39.494 "nvmf_set_config", 00:05:39.494 "nvmf_set_max_subsystems", 00:05:39.494 "nvmf_stop_mdns_prr", 00:05:39.494 "nvmf_publish_mdns_prr", 
00:05:39.494 "nvmf_subsystem_get_listeners", 00:05:39.494 "nvmf_subsystem_get_qpairs", 00:05:39.494 "nvmf_subsystem_get_controllers", 00:05:39.494 "nvmf_get_stats", 00:05:39.494 "nvmf_get_transports", 00:05:39.494 "nvmf_create_transport", 00:05:39.494 "nvmf_get_targets", 00:05:39.494 "nvmf_delete_target", 00:05:39.494 "nvmf_create_target", 00:05:39.494 "nvmf_subsystem_allow_any_host", 00:05:39.494 "nvmf_subsystem_set_keys", 00:05:39.494 "nvmf_subsystem_remove_host", 00:05:39.494 "nvmf_subsystem_add_host", 00:05:39.494 "nvmf_ns_remove_host", 00:05:39.494 "nvmf_ns_add_host", 00:05:39.494 "nvmf_subsystem_remove_ns", 00:05:39.494 "nvmf_subsystem_set_ns_ana_group", 00:05:39.494 "nvmf_subsystem_add_ns", 00:05:39.494 "nvmf_subsystem_listener_set_ana_state", 00:05:39.494 "nvmf_discovery_get_referrals", 00:05:39.494 "nvmf_discovery_remove_referral", 00:05:39.494 "nvmf_discovery_add_referral", 00:05:39.494 "nvmf_subsystem_remove_listener", 00:05:39.494 "nvmf_subsystem_add_listener", 00:05:39.494 "nvmf_delete_subsystem", 00:05:39.494 "nvmf_create_subsystem", 00:05:39.494 "nvmf_get_subsystems", 00:05:39.494 "env_dpdk_get_mem_stats", 00:05:39.494 "nbd_get_disks", 00:05:39.494 "nbd_stop_disk", 00:05:39.494 "nbd_start_disk", 00:05:39.494 "ublk_recover_disk", 00:05:39.494 "ublk_get_disks", 00:05:39.494 "ublk_stop_disk", 00:05:39.494 "ublk_start_disk", 00:05:39.494 "ublk_destroy_target", 00:05:39.494 "ublk_create_target", 00:05:39.494 "virtio_blk_create_transport", 00:05:39.494 "virtio_blk_get_transports", 00:05:39.494 "vhost_controller_set_coalescing", 00:05:39.494 "vhost_get_controllers", 00:05:39.494 "vhost_delete_controller", 00:05:39.494 "vhost_create_blk_controller", 00:05:39.494 "vhost_scsi_controller_remove_target", 00:05:39.494 "vhost_scsi_controller_add_target", 00:05:39.494 "vhost_start_scsi_controller", 00:05:39.494 "vhost_create_scsi_controller", 00:05:39.494 "thread_set_cpumask", 00:05:39.494 "scheduler_set_options", 00:05:39.494 "framework_get_governor", 00:05:39.494 
"framework_get_scheduler", 00:05:39.494 "framework_set_scheduler", 00:05:39.494 "framework_get_reactors", 00:05:39.494 "thread_get_io_channels", 00:05:39.494 "thread_get_pollers", 00:05:39.494 "thread_get_stats", 00:05:39.494 "framework_monitor_context_switch", 00:05:39.494 "spdk_kill_instance", 00:05:39.495 "log_enable_timestamps", 00:05:39.495 "log_get_flags", 00:05:39.495 "log_clear_flag", 00:05:39.495 "log_set_flag", 00:05:39.495 "log_get_level", 00:05:39.495 "log_set_level", 00:05:39.495 "log_get_print_level", 00:05:39.495 "log_set_print_level", 00:05:39.495 "framework_enable_cpumask_locks", 00:05:39.495 "framework_disable_cpumask_locks", 00:05:39.495 "framework_wait_init", 00:05:39.495 "framework_start_init", 00:05:39.495 "scsi_get_devices", 00:05:39.495 "bdev_get_histogram", 00:05:39.495 "bdev_enable_histogram", 00:05:39.495 "bdev_set_qos_limit", 00:05:39.495 "bdev_set_qd_sampling_period", 00:05:39.495 "bdev_get_bdevs", 00:05:39.495 "bdev_reset_iostat", 00:05:39.495 "bdev_get_iostat", 00:05:39.495 "bdev_examine", 00:05:39.495 "bdev_wait_for_examine", 00:05:39.495 "bdev_set_options", 00:05:39.495 "accel_get_stats", 00:05:39.495 "accel_set_options", 00:05:39.495 "accel_set_driver", 00:05:39.495 "accel_crypto_key_destroy", 00:05:39.495 "accel_crypto_keys_get", 00:05:39.495 "accel_crypto_key_create", 00:05:39.495 "accel_assign_opc", 00:05:39.495 "accel_get_module_info", 00:05:39.495 "accel_get_opc_assignments", 00:05:39.495 "vmd_rescan", 00:05:39.495 "vmd_remove_device", 00:05:39.495 "vmd_enable", 00:05:39.495 "sock_get_default_impl", 00:05:39.495 "sock_set_default_impl", 00:05:39.495 "sock_impl_set_options", 00:05:39.495 "sock_impl_get_options", 00:05:39.495 "iobuf_get_stats", 00:05:39.495 "iobuf_set_options", 00:05:39.495 "keyring_get_keys", 00:05:39.495 "vfu_tgt_set_base_path", 00:05:39.495 "framework_get_pci_devices", 00:05:39.495 "framework_get_config", 00:05:39.495 "framework_get_subsystems", 00:05:39.495 "fsdev_set_opts", 00:05:39.495 "fsdev_get_opts", 
00:05:39.495 "trace_get_info", 00:05:39.495 "trace_get_tpoint_group_mask", 00:05:39.495 "trace_disable_tpoint_group", 00:05:39.495 "trace_enable_tpoint_group", 00:05:39.495 "trace_clear_tpoint_mask", 00:05:39.495 "trace_set_tpoint_mask", 00:05:39.495 "notify_get_notifications", 00:05:39.495 "notify_get_types", 00:05:39.495 "spdk_get_version", 00:05:39.495 "rpc_get_methods" 00:05:39.495 ] 00:05:39.495 05:55:59 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:39.495 05:55:59 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:39.495 05:55:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:39.495 05:55:59 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:39.495 05:55:59 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 779259 00:05:39.495 05:55:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 779259 ']' 00:05:39.495 05:55:59 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 779259 00:05:39.495 05:55:59 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:39.495 05:55:59 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.495 05:55:59 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 779259 00:05:39.495 05:55:59 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.495 05:55:59 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.495 05:55:59 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 779259' 00:05:39.495 killing process with pid 779259 00:05:39.495 05:55:59 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 779259 00:05:39.495 05:55:59 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 779259 00:05:39.754 00:05:39.754 real 0m1.094s 00:05:39.754 user 0m1.838s 00:05:39.754 sys 0m0.444s 00:05:39.754 05:55:59 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.754 05:55:59 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:05:39.754 ************************************ 00:05:39.754 END TEST spdkcli_tcp 00:05:39.754 ************************************ 00:05:39.754 05:55:59 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:39.754 05:55:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.754 05:55:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.754 05:55:59 -- common/autotest_common.sh@10 -- # set +x 00:05:40.013 ************************************ 00:05:40.013 START TEST dpdk_mem_utility 00:05:40.013 ************************************ 00:05:40.013 05:55:59 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:40.013 * Looking for test storage... 00:05:40.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:40.013 05:55:59 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:40.013 05:55:59 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:40.013 05:55:59 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:40.013 05:56:00 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:40.013 05:56:00 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.013 05:56:00 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.013 05:56:00 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.013 05:56:00 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.013 05:56:00 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.013 05:56:00 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.013 05:56:00 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:05:40.013 05:56:00 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.013 05:56:00 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.013 05:56:00 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.013 05:56:00 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.013 05:56:00 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:40.013 05:56:00 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:40.013 05:56:00 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.013 05:56:00 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.013 05:56:00 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:40.013 05:56:00 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:40.013 05:56:00 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.013 05:56:00 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:40.013 05:56:00 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.013 05:56:00 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:40.013 05:56:00 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:40.013 05:56:00 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.013 05:56:00 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:40.013 05:56:00 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.013 05:56:00 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.013 05:56:00 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.013 05:56:00 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:40.013 05:56:00 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.013 05:56:00 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 
00:05:40.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.013 --rc genhtml_branch_coverage=1 00:05:40.013 --rc genhtml_function_coverage=1 00:05:40.013 --rc genhtml_legend=1 00:05:40.013 --rc geninfo_all_blocks=1 00:05:40.013 --rc geninfo_unexecuted_blocks=1 00:05:40.013 00:05:40.013 ' 00:05:40.013 05:56:00 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:40.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.013 --rc genhtml_branch_coverage=1 00:05:40.013 --rc genhtml_function_coverage=1 00:05:40.013 --rc genhtml_legend=1 00:05:40.013 --rc geninfo_all_blocks=1 00:05:40.013 --rc geninfo_unexecuted_blocks=1 00:05:40.013 00:05:40.013 ' 00:05:40.013 05:56:00 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:40.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.013 --rc genhtml_branch_coverage=1 00:05:40.013 --rc genhtml_function_coverage=1 00:05:40.013 --rc genhtml_legend=1 00:05:40.013 --rc geninfo_all_blocks=1 00:05:40.013 --rc geninfo_unexecuted_blocks=1 00:05:40.013 00:05:40.013 ' 00:05:40.013 05:56:00 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:40.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.013 --rc genhtml_branch_coverage=1 00:05:40.013 --rc genhtml_function_coverage=1 00:05:40.013 --rc genhtml_legend=1 00:05:40.013 --rc geninfo_all_blocks=1 00:05:40.014 --rc geninfo_unexecuted_blocks=1 00:05:40.014 00:05:40.014 ' 00:05:40.014 05:56:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:40.014 05:56:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=779557 00:05:40.014 05:56:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 779557 00:05:40.014 05:56:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.014 05:56:00 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 779557 ']' 00:05:40.014 05:56:00 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.014 05:56:00 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.014 05:56:00 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.014 05:56:00 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.014 05:56:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:40.014 [2024-12-15 05:56:00.139152] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:40.014 [2024-12-15 05:56:00.139205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779557 ] 00:05:40.272 [2024-12-15 05:56:00.214389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.273 [2024-12-15 05:56:00.237078] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.532 05:56:00 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.532 05:56:00 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:40.532 05:56:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:40.532 05:56:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:40.532 05:56:00 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.532 
05:56:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:40.532 { 00:05:40.532 "filename": "/tmp/spdk_mem_dump.txt" 00:05:40.532 } 00:05:40.532 05:56:00 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.532 05:56:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:40.532 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:40.532 1 heaps totaling size 818.000000 MiB 00:05:40.532 size: 818.000000 MiB heap id: 0 00:05:40.532 end heaps---------- 00:05:40.532 9 mempools totaling size 603.782043 MiB 00:05:40.532 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:40.532 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:40.532 size: 100.555481 MiB name: bdev_io_779557 00:05:40.532 size: 50.003479 MiB name: msgpool_779557 00:05:40.532 size: 36.509338 MiB name: fsdev_io_779557 00:05:40.532 size: 21.763794 MiB name: PDU_Pool 00:05:40.532 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:40.532 size: 4.133484 MiB name: evtpool_779557 00:05:40.532 size: 0.026123 MiB name: Session_Pool 00:05:40.532 end mempools------- 00:05:40.532 6 memzones totaling size 4.142822 MiB 00:05:40.532 size: 1.000366 MiB name: RG_ring_0_779557 00:05:40.532 size: 1.000366 MiB name: RG_ring_1_779557 00:05:40.532 size: 1.000366 MiB name: RG_ring_4_779557 00:05:40.532 size: 1.000366 MiB name: RG_ring_5_779557 00:05:40.532 size: 0.125366 MiB name: RG_ring_2_779557 00:05:40.532 size: 0.015991 MiB name: RG_ring_3_779557 00:05:40.532 end memzones------- 00:05:40.532 05:56:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:40.532 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:40.532 list of free elements. 
size: 10.852478 MiB 00:05:40.532 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:40.532 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:40.532 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:40.532 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:40.532 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:40.532 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:40.532 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:40.532 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:40.532 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:40.532 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:40.532 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:40.532 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:40.532 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:40.532 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:40.532 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:40.532 list of standard malloc elements. 
size: 199.218628 MiB 00:05:40.532 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:40.532 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:40.532 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:40.532 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:40.532 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:40.532 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:40.532 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:40.532 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:40.532 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:40.532 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:40.532 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:40.532 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:40.532 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:40.532 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:40.532 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:40.532 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:40.532 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:40.532 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:40.532 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:40.532 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:40.532 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:40.532 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:40.532 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:40.532 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:40.532 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:40.532 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:40.532 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:40.532 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:40.532 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:40.532 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:40.532 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:40.532 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:40.532 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:40.532 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:40.532 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:40.533 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:40.533 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:40.533 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:40.533 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:40.533 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:40.533 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:40.533 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:40.533 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:40.533 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:40.533 list of memzone associated elements. 
size: 607.928894 MiB 00:05:40.533 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:40.533 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:40.533 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:40.533 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:40.533 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:40.533 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_779557_0 00:05:40.533 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:40.533 associated memzone info: size: 48.002930 MiB name: MP_msgpool_779557_0 00:05:40.533 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:40.533 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_779557_0 00:05:40.533 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:40.533 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:40.533 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:40.533 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:40.533 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:40.533 associated memzone info: size: 3.000122 MiB name: MP_evtpool_779557_0 00:05:40.533 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:40.533 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_779557 00:05:40.533 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:40.533 associated memzone info: size: 1.007996 MiB name: MP_evtpool_779557 00:05:40.533 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:40.533 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:40.533 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:40.533 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:40.533 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:40.533 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:40.533 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:40.533 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:40.533 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:40.533 associated memzone info: size: 1.000366 MiB name: RG_ring_0_779557 00:05:40.533 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:40.533 associated memzone info: size: 1.000366 MiB name: RG_ring_1_779557 00:05:40.533 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:40.533 associated memzone info: size: 1.000366 MiB name: RG_ring_4_779557 00:05:40.533 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:40.533 associated memzone info: size: 1.000366 MiB name: RG_ring_5_779557 00:05:40.533 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:40.533 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_779557 00:05:40.533 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:40.533 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_779557 00:05:40.533 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:40.533 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:40.533 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:40.533 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:40.533 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:40.533 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:40.533 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:40.533 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_779557 00:05:40.533 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:40.533 associated memzone info: size: 0.125366 MiB name: RG_ring_2_779557 00:05:40.533 element at address: 0x2000064f5b80 with size: 0.031738 MiB 
00:05:40.533 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:40.533 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:40.533 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:40.533 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:40.533 associated memzone info: size: 0.015991 MiB name: RG_ring_3_779557 00:05:40.533 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:40.533 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:40.533 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:40.533 associated memzone info: size: 0.000183 MiB name: MP_msgpool_779557 00:05:40.533 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:40.533 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_779557 00:05:40.533 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:40.533 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_779557 00:05:40.533 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:40.533 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:40.533 05:56:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:40.533 05:56:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 779557 00:05:40.533 05:56:00 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 779557 ']' 00:05:40.533 05:56:00 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 779557 00:05:40.533 05:56:00 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:40.533 05:56:00 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.533 05:56:00 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 779557 00:05:40.533 05:56:00 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.533 05:56:00 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.533 05:56:00 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 779557' 00:05:40.533 killing process with pid 779557 00:05:40.533 05:56:00 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 779557 00:05:40.533 05:56:00 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 779557 00:05:40.792 00:05:40.792 real 0m0.998s 00:05:40.792 user 0m0.932s 00:05:40.792 sys 0m0.418s 00:05:40.792 05:56:00 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.792 05:56:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:40.792 ************************************ 00:05:40.792 END TEST dpdk_mem_utility 00:05:40.792 ************************************ 00:05:41.051 05:56:00 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:41.051 05:56:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.051 05:56:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.051 05:56:00 -- common/autotest_common.sh@10 -- # set +x 00:05:41.051 ************************************ 00:05:41.051 START TEST event 00:05:41.051 ************************************ 00:05:41.051 05:56:00 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:41.051 * Looking for test storage... 
00:05:41.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:41.051 05:56:01 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:41.051 05:56:01 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:41.051 05:56:01 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:41.051 05:56:01 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:41.051 05:56:01 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.051 05:56:01 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.051 05:56:01 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.051 05:56:01 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.051 05:56:01 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.051 05:56:01 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.051 05:56:01 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.051 05:56:01 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.051 05:56:01 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.051 05:56:01 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.051 05:56:01 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.051 05:56:01 event -- scripts/common.sh@344 -- # case "$op" in 00:05:41.051 05:56:01 event -- scripts/common.sh@345 -- # : 1 00:05:41.051 05:56:01 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.051 05:56:01 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.051 05:56:01 event -- scripts/common.sh@365 -- # decimal 1 00:05:41.051 05:56:01 event -- scripts/common.sh@353 -- # local d=1 00:05:41.051 05:56:01 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.051 05:56:01 event -- scripts/common.sh@355 -- # echo 1 00:05:41.051 05:56:01 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.051 05:56:01 event -- scripts/common.sh@366 -- # decimal 2 00:05:41.051 05:56:01 event -- scripts/common.sh@353 -- # local d=2 00:05:41.051 05:56:01 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.051 05:56:01 event -- scripts/common.sh@355 -- # echo 2 00:05:41.051 05:56:01 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.051 05:56:01 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.051 05:56:01 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.051 05:56:01 event -- scripts/common.sh@368 -- # return 0 00:05:41.051 05:56:01 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.051 05:56:01 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:41.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.051 --rc genhtml_branch_coverage=1 00:05:41.051 --rc genhtml_function_coverage=1 00:05:41.051 --rc genhtml_legend=1 00:05:41.051 --rc geninfo_all_blocks=1 00:05:41.051 --rc geninfo_unexecuted_blocks=1 00:05:41.051 00:05:41.051 ' 00:05:41.051 05:56:01 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:41.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.051 --rc genhtml_branch_coverage=1 00:05:41.051 --rc genhtml_function_coverage=1 00:05:41.051 --rc genhtml_legend=1 00:05:41.051 --rc geninfo_all_blocks=1 00:05:41.051 --rc geninfo_unexecuted_blocks=1 00:05:41.051 00:05:41.051 ' 00:05:41.051 05:56:01 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:41.051 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:41.051 --rc genhtml_branch_coverage=1 00:05:41.051 --rc genhtml_function_coverage=1 00:05:41.051 --rc genhtml_legend=1 00:05:41.051 --rc geninfo_all_blocks=1 00:05:41.051 --rc geninfo_unexecuted_blocks=1 00:05:41.051 00:05:41.051 ' 00:05:41.051 05:56:01 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:41.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.051 --rc genhtml_branch_coverage=1 00:05:41.051 --rc genhtml_function_coverage=1 00:05:41.051 --rc genhtml_legend=1 00:05:41.051 --rc geninfo_all_blocks=1 00:05:41.051 --rc geninfo_unexecuted_blocks=1 00:05:41.051 00:05:41.051 ' 00:05:41.051 05:56:01 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:41.051 05:56:01 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:41.051 05:56:01 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:41.051 05:56:01 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:41.051 05:56:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.051 05:56:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.051 ************************************ 00:05:41.051 START TEST event_perf 00:05:41.051 ************************************ 00:05:41.051 05:56:01 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:41.309 Running I/O for 1 seconds...[2024-12-15 05:56:01.202537] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:41.309 [2024-12-15 05:56:01.202606] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779704 ] 00:05:41.309 [2024-12-15 05:56:01.283006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:41.309 [2024-12-15 05:56:01.309135] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.309 [2024-12-15 05:56:01.309247] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.309 [2024-12-15 05:56:01.309353] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.309 Running I/O for 1 seconds...[2024-12-15 05:56:01.309353] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:42.242 00:05:42.242 lcore 0: 203691 00:05:42.242 lcore 1: 203691 00:05:42.242 lcore 2: 203692 00:05:42.242 lcore 3: 203692 00:05:42.242 done. 
00:05:42.242 00:05:42.242 real 0m1.164s 00:05:42.242 user 0m4.088s 00:05:42.242 sys 0m0.075s 00:05:42.242 05:56:02 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.242 05:56:02 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:42.242 ************************************ 00:05:42.242 END TEST event_perf 00:05:42.242 ************************************ 00:05:42.242 05:56:02 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:42.242 05:56:02 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:42.242 05:56:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.501 05:56:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.501 ************************************ 00:05:42.501 START TEST event_reactor 00:05:42.501 ************************************ 00:05:42.501 05:56:02 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:42.501 [2024-12-15 05:56:02.435975] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:42.501 [2024-12-15 05:56:02.436053] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779892 ] 00:05:42.501 [2024-12-15 05:56:02.516064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.501 [2024-12-15 05:56:02.538217] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.437 test_start 00:05:43.437 oneshot 00:05:43.437 tick 100 00:05:43.437 tick 100 00:05:43.437 tick 250 00:05:43.437 tick 100 00:05:43.437 tick 100 00:05:43.437 tick 250 00:05:43.437 tick 100 00:05:43.437 tick 500 00:05:43.437 tick 100 00:05:43.437 tick 100 00:05:43.437 tick 250 00:05:43.437 tick 100 00:05:43.437 tick 100 00:05:43.437 test_end 00:05:43.437 00:05:43.437 real 0m1.157s 00:05:43.437 user 0m1.081s 00:05:43.437 sys 0m0.072s 00:05:43.437 05:56:03 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.437 05:56:03 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:43.437 ************************************ 00:05:43.437 END TEST event_reactor 00:05:43.437 ************************************ 00:05:43.696 05:56:03 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:43.696 05:56:03 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:43.696 05:56:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.696 05:56:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.696 ************************************ 00:05:43.696 START TEST event_reactor_perf 00:05:43.696 ************************************ 00:05:43.696 05:56:03 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:05:43.696 [2024-12-15 05:56:03.660502] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:43.696 [2024-12-15 05:56:03.660569] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780133 ] 00:05:43.696 [2024-12-15 05:56:03.737842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.696 [2024-12-15 05:56:03.759279] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.074 test_start 00:05:45.074 test_end 00:05:45.074 Performance: 520853 events per second 00:05:45.074 00:05:45.074 real 0m1.150s 00:05:45.074 user 0m1.082s 00:05:45.074 sys 0m0.064s 00:05:45.074 05:56:04 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.074 05:56:04 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:45.074 ************************************ 00:05:45.074 END TEST event_reactor_perf 00:05:45.074 ************************************ 00:05:45.074 05:56:04 event -- event/event.sh@49 -- # uname -s 00:05:45.074 05:56:04 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:45.074 05:56:04 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:45.074 05:56:04 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.074 05:56:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.074 05:56:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.074 ************************************ 00:05:45.074 START TEST event_scheduler 00:05:45.074 ************************************ 00:05:45.074 05:56:04 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:45.074 * Looking for test storage... 00:05:45.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:45.074 05:56:04 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:45.074 05:56:04 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:45.074 05:56:04 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:45.074 05:56:05 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:45.074 05:56:05 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.074 05:56:05 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.074 05:56:05 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.074 05:56:05 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.074 05:56:05 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.074 05:56:05 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.074 05:56:05 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.074 05:56:05 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.074 05:56:05 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.074 05:56:05 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.074 05:56:05 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.074 05:56:05 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:45.074 05:56:05 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:45.074 05:56:05 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.074 05:56:05 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.074 05:56:05 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:45.074 05:56:05 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:45.074 05:56:05 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.074 05:56:05 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:45.074 05:56:05 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.074 05:56:05 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:45.074 05:56:05 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:45.074 05:56:05 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.074 05:56:05 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:45.074 05:56:05 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.074 05:56:05 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.074 05:56:05 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.074 05:56:05 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:45.074 05:56:05 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.074 05:56:05 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:45.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.075 --rc genhtml_branch_coverage=1 00:05:45.075 --rc genhtml_function_coverage=1 00:05:45.075 --rc genhtml_legend=1 00:05:45.075 --rc geninfo_all_blocks=1 00:05:45.075 --rc geninfo_unexecuted_blocks=1 00:05:45.075 00:05:45.075 ' 00:05:45.075 05:56:05 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:45.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.075 --rc genhtml_branch_coverage=1 00:05:45.075 --rc genhtml_function_coverage=1 00:05:45.075 --rc 
genhtml_legend=1 00:05:45.075 --rc geninfo_all_blocks=1 00:05:45.075 --rc geninfo_unexecuted_blocks=1 00:05:45.075 00:05:45.075 ' 00:05:45.075 05:56:05 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:45.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.075 --rc genhtml_branch_coverage=1 00:05:45.075 --rc genhtml_function_coverage=1 00:05:45.075 --rc genhtml_legend=1 00:05:45.075 --rc geninfo_all_blocks=1 00:05:45.075 --rc geninfo_unexecuted_blocks=1 00:05:45.075 00:05:45.075 ' 00:05:45.075 05:56:05 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:45.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.075 --rc genhtml_branch_coverage=1 00:05:45.075 --rc genhtml_function_coverage=1 00:05:45.075 --rc genhtml_legend=1 00:05:45.075 --rc geninfo_all_blocks=1 00:05:45.075 --rc geninfo_unexecuted_blocks=1 00:05:45.075 00:05:45.075 ' 00:05:45.075 05:56:05 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:45.075 05:56:05 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=780410 00:05:45.075 05:56:05 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.075 05:56:05 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:45.075 05:56:05 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 780410 00:05:45.075 05:56:05 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 780410 ']' 00:05:45.075 05:56:05 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.075 05:56:05 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.075 05:56:05 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.075 05:56:05 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.075 05:56:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.075 [2024-12-15 05:56:05.084690] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:45.075 [2024-12-15 05:56:05.084740] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780410 ] 00:05:45.075 [2024-12-15 05:56:05.157150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:45.075 [2024-12-15 05:56:05.182662] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.075 [2024-12-15 05:56:05.182774] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.075 [2024-12-15 05:56:05.182856] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.075 [2024-12-15 05:56:05.182858] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:45.334 05:56:05 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.334 05:56:05 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:45.334 05:56:05 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:45.334 05:56:05 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.334 05:56:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.334 [2024-12-15 05:56:05.251848] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:45.334 [2024-12-15 05:56:05.251867] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:45.334 [2024-12-15 05:56:05.251876] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:45.334 [2024-12-15 05:56:05.251882] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:45.334 [2024-12-15 05:56:05.251887] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:45.334 05:56:05 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.334 05:56:05 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:45.334 05:56:05 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.334 05:56:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.334 [2024-12-15 05:56:05.326058] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:45.334 05:56:05 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.334 05:56:05 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:45.334 05:56:05 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.334 05:56:05 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.334 05:56:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.334 ************************************ 00:05:45.334 START TEST scheduler_create_thread 00:05:45.334 ************************************ 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.334 2 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.334 3 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.334 4 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.334 5 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.334 05:56:05 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.334 6 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.334 7 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.334 8 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.334 05:56:05 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.334 9 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.334 10 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.334 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.592 05:56:05 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:45.592 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.592 05:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.969 05:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.969 05:56:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:46.969 05:56:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:46.969 05:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.969 05:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.904 05:56:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.904 00:05:47.904 real 0m2.619s 00:05:47.904 user 0m0.028s 00:05:47.904 sys 0m0.002s 00:05:47.904 05:56:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.904 05:56:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.904 ************************************ 00:05:47.904 END TEST scheduler_create_thread 00:05:47.904 ************************************ 00:05:47.904 05:56:08 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:47.904 05:56:08 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 780410 00:05:47.904 05:56:08 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 780410 ']' 00:05:47.904 05:56:08 event.event_scheduler -- common/autotest_common.sh@958 -- # kill 
-0 780410 00:05:47.904 05:56:08 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:47.904 05:56:08 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.904 05:56:08 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 780410 00:05:48.163 05:56:08 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:48.163 05:56:08 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:48.163 05:56:08 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 780410' 00:05:48.163 killing process with pid 780410 00:05:48.163 05:56:08 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 780410 00:05:48.163 05:56:08 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 780410 00:05:48.422 [2024-12-15 05:56:08.464075] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:48.681 00:05:48.681 real 0m3.763s 00:05:48.681 user 0m5.700s 00:05:48.681 sys 0m0.379s 00:05:48.681 05:56:08 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.681 05:56:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:48.681 ************************************ 00:05:48.681 END TEST event_scheduler 00:05:48.681 ************************************ 00:05:48.681 05:56:08 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:48.681 05:56:08 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:48.681 05:56:08 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.681 05:56:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.681 05:56:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.681 ************************************ 00:05:48.681 START TEST app_repeat 00:05:48.681 ************************************ 00:05:48.681 05:56:08 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:48.681 05:56:08 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.681 05:56:08 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.681 05:56:08 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:48.681 05:56:08 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.681 05:56:08 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:48.681 05:56:08 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:48.681 05:56:08 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:48.681 05:56:08 event.app_repeat -- event/event.sh@19 -- # repeat_pid=781144 00:05:48.681 05:56:08 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.681 05:56:08 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:48.681 05:56:08 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 781144' 00:05:48.681 Process app_repeat pid: 781144 00:05:48.681 05:56:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:48.681 05:56:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:48.681 spdk_app_start Round 0 00:05:48.681 05:56:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 781144 /var/tmp/spdk-nbd.sock 00:05:48.681 05:56:08 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 781144 ']' 00:05:48.681 05:56:08 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.681 05:56:08 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.681 05:56:08 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:48.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:48.681 05:56:08 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.681 05:56:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.681 [2024-12-15 05:56:08.739032] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:48.681 [2024-12-15 05:56:08.739082] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid781144 ] 00:05:48.681 [2024-12-15 05:56:08.813401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.940 [2024-12-15 05:56:08.837919] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.940 [2024-12-15 05:56:08.837922] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.940 05:56:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.940 05:56:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:48.940 05:56:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.199 Malloc0 00:05:49.199 05:56:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.199 Malloc1 00:05:49.199 05:56:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.199 05:56:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.458 05:56:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.458 05:56:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:49.458 05:56:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.458 05:56:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:49.458 05:56:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.458 
05:56:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.458 05:56:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.458 05:56:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:49.458 05:56:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.458 05:56:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:49.458 05:56:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:49.458 05:56:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:49.458 05:56:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.458 05:56:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:49.458 /dev/nbd0 00:05:49.458 05:56:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:49.458 05:56:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:49.458 05:56:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:49.458 05:56:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:49.458 05:56:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:49.458 05:56:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:49.458 05:56:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:49.458 05:56:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:49.458 05:56:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:49.458 05:56:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:49.458 05:56:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:49.458 1+0 records in 00:05:49.458 1+0 records out 00:05:49.458 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233227 s, 17.6 MB/s 00:05:49.458 05:56:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.458 05:56:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:49.458 05:56:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.458 05:56:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:49.458 05:56:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:49.458 05:56:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.458 05:56:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.458 05:56:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:49.717 /dev/nbd1 00:05:49.717 05:56:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:49.717 05:56:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:49.717 05:56:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:49.717 05:56:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:49.717 05:56:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:49.717 05:56:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:49.717 05:56:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:49.717 05:56:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:49.717 05:56:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:49.717 05:56:09 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:49.717 05:56:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.717 1+0 records in 00:05:49.717 1+0 records out 00:05:49.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193809 s, 21.1 MB/s 00:05:49.717 05:56:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.717 05:56:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:49.717 05:56:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.717 05:56:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:49.717 05:56:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:49.717 05:56:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.717 05:56:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.718 05:56:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.718 05:56:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.718 05:56:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.976 05:56:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:49.976 { 00:05:49.976 "nbd_device": "/dev/nbd0", 00:05:49.976 "bdev_name": "Malloc0" 00:05:49.976 }, 00:05:49.976 { 00:05:49.976 "nbd_device": "/dev/nbd1", 00:05:49.976 "bdev_name": "Malloc1" 00:05:49.976 } 00:05:49.976 ]' 00:05:49.976 05:56:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:49.976 { 00:05:49.976 "nbd_device": "/dev/nbd0", 00:05:49.976 "bdev_name": "Malloc0" 00:05:49.976 
}, 00:05:49.976 { 00:05:49.976 "nbd_device": "/dev/nbd1", 00:05:49.976 "bdev_name": "Malloc1" 00:05:49.976 } 00:05:49.976 ]' 00:05:49.976 05:56:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.976 05:56:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:49.976 /dev/nbd1' 00:05:49.976 05:56:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:49.976 /dev/nbd1' 00:05:49.976 05:56:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.976 05:56:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:49.976 05:56:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:49.976 05:56:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:49.976 05:56:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:49.976 05:56:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:49.976 05:56:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.976 05:56:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.976 05:56:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:49.976 05:56:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.976 05:56:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:49.976 05:56:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:49.976 256+0 records in 00:05:49.976 256+0 records out 00:05:49.976 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101051 s, 104 MB/s 00:05:49.976 05:56:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.976 05:56:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:49.976 256+0 records in 00:05:49.976 256+0 records out 00:05:49.976 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140468 s, 74.6 MB/s 00:05:49.976 05:56:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.976 05:56:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:50.235 256+0 records in 00:05:50.235 256+0 records out 00:05:50.235 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153351 s, 68.4 MB/s 00:05:50.235 05:56:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:50.235 05:56:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.235 05:56:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.235 05:56:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:50.235 05:56:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.235 05:56:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:50.235 05:56:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:50.235 05:56:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.235 05:56:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:50.235 05:56:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.235 05:56:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:50.235 05:56:10 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.235 05:56:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:50.235 05:56:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.235 05:56:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.235 05:56:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:50.235 05:56:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:50.235 05:56:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.235 05:56:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:50.235 05:56:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:50.235 05:56:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:50.235 05:56:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:50.235 05:56:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.235 05:56:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.235 05:56:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:50.235 05:56:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.235 05:56:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.235 05:56:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.235 05:56:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:50.494 05:56:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:50.494 05:56:10 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:50.494 05:56:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:50.494 05:56:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.494 05:56:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.494 05:56:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:50.494 05:56:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.494 05:56:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.494 05:56:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.494 05:56:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.494 05:56:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.753 05:56:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:50.753 05:56:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:50.753 05:56:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.753 05:56:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:50.753 05:56:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:50.753 05:56:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.753 05:56:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:50.753 05:56:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:50.753 05:56:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:50.753 05:56:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:50.753 05:56:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:50.753 05:56:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:50.753 05:56:10 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:51.012 05:56:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:51.270 [2024-12-15 05:56:11.180975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.270 [2024-12-15 05:56:11.200765] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.270 [2024-12-15 05:56:11.200765] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.270 [2024-12-15 05:56:11.241234] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:51.270 [2024-12-15 05:56:11.241272] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:54.554 05:56:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:54.554 05:56:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:54.554 spdk_app_start Round 1 00:05:54.554 05:56:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 781144 /var/tmp/spdk-nbd.sock 00:05:54.554 05:56:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 781144 ']' 00:05:54.554 05:56:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.554 05:56:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.554 05:56:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:54.554 05:56:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.554 05:56:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.554 05:56:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.554 05:56:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:54.554 05:56:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.554 Malloc0 00:05:54.554 05:56:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.554 Malloc1 00:05:54.554 05:56:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.554 05:56:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.554 05:56:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.554 05:56:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:54.554 05:56:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.554 05:56:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:54.554 05:56:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.554 05:56:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.554 05:56:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.554 05:56:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:54.554 05:56:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.554 05:56:14 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:54.554 05:56:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:54.554 05:56:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:54.554 05:56:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.554 05:56:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:54.813 /dev/nbd0 00:05:54.813 05:56:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:54.813 05:56:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:54.813 05:56:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:54.813 05:56:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:54.813 05:56:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.813 05:56:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.813 05:56:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:54.813 05:56:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:54.813 05:56:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.813 05:56:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.813 05:56:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.813 1+0 records in 00:05:54.813 1+0 records out 00:05:54.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189611 s, 21.6 MB/s 00:05:54.813 05:56:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.813 05:56:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:54.813 05:56:14 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.813 05:56:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.813 05:56:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:54.813 05:56:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.813 05:56:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.813 05:56:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:55.072 /dev/nbd1 00:05:55.072 05:56:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:55.072 05:56:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:55.072 05:56:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:55.072 05:56:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:55.072 05:56:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:55.072 05:56:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:55.072 05:56:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:55.072 05:56:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:55.072 05:56:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:55.072 05:56:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:55.072 05:56:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.072 1+0 records in 00:05:55.072 1+0 records out 00:05:55.072 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247093 s, 16.6 MB/s 00:05:55.072 05:56:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.072 05:56:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:55.072 05:56:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.072 05:56:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:55.072 05:56:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:55.072 05:56:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.072 05:56:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.072 05:56:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.072 05:56:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.072 05:56:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.330 05:56:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:55.330 { 00:05:55.330 "nbd_device": "/dev/nbd0", 00:05:55.330 "bdev_name": "Malloc0" 00:05:55.330 }, 00:05:55.330 { 00:05:55.330 "nbd_device": "/dev/nbd1", 00:05:55.330 "bdev_name": "Malloc1" 00:05:55.330 } 00:05:55.330 ]' 00:05:55.330 05:56:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:55.330 { 00:05:55.330 "nbd_device": "/dev/nbd0", 00:05:55.330 "bdev_name": "Malloc0" 00:05:55.330 }, 00:05:55.330 { 00:05:55.330 "nbd_device": "/dev/nbd1", 00:05:55.330 "bdev_name": "Malloc1" 00:05:55.330 } 00:05:55.330 ]' 00:05:55.330 05:56:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.330 05:56:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:55.330 /dev/nbd1' 00:05:55.330 05:56:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:55.330 /dev/nbd1' 00:05:55.330 
05:56:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.330 05:56:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:55.330 05:56:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:55.330 05:56:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:55.330 05:56:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:55.330 05:56:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:55.330 05:56:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.330 05:56:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.330 05:56:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:55.331 05:56:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.331 05:56:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:55.331 05:56:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:55.331 256+0 records in 00:05:55.331 256+0 records out 00:05:55.331 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00989587 s, 106 MB/s 00:05:55.331 05:56:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.331 05:56:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:55.331 256+0 records in 00:05:55.331 256+0 records out 00:05:55.331 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133431 s, 78.6 MB/s 00:05:55.331 05:56:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.331 05:56:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.331 256+0 records in 00:05:55.331 256+0 records out 00:05:55.331 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015065 s, 69.6 MB/s 00:05:55.331 05:56:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:55.331 05:56:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.331 05:56:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.331 05:56:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:55.331 05:56:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.331 05:56:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:55.331 05:56:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:55.331 05:56:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.331 05:56:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.331 05:56:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.331 05:56:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:55.331 05:56:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.331 05:56:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:55.331 05:56:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.331 05:56:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:55.331 05:56:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.331 05:56:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:55.331 05:56:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.331 05:56:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:55.589 05:56:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:55.589 05:56:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:55.589 05:56:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:55.589 05:56:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.589 05:56:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.589 05:56:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:55.589 05:56:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.589 05:56:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.589 05:56:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.589 05:56:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:55.848 05:56:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:55.848 05:56:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:55.848 05:56:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:55.848 05:56:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.848 05:56:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.848 05:56:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:55.848 05:56:15 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:55.848 05:56:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.848 05:56:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.848 05:56:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.848 05:56:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.106 05:56:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:56.107 05:56:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:56.107 05:56:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.107 05:56:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:56.107 05:56:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:56.107 05:56:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.107 05:56:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:56.107 05:56:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:56.107 05:56:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:56.107 05:56:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:56.107 05:56:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:56.107 05:56:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:56.107 05:56:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:56.365 05:56:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:56.365 [2024-12-15 05:56:16.461125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.365 [2024-12-15 05:56:16.480699] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.365 [2024-12-15 05:56:16.480700] 
reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.623 [2024-12-15 05:56:16.521759] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:56.623 [2024-12-15 05:56:16.521794] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:59.910 05:56:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:59.910 05:56:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:59.910 spdk_app_start Round 2 00:05:59.910 05:56:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 781144 /var/tmp/spdk-nbd.sock 00:05:59.910 05:56:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 781144 ']' 00:05:59.910 05:56:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.910 05:56:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.910 05:56:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:59.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:59.910 05:56:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.910 05:56:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.910 05:56:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.910 05:56:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:59.910 05:56:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.910 Malloc0 00:05:59.910 05:56:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.910 Malloc1 00:05:59.910 05:56:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.910 05:56:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.910 05:56:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.910 05:56:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:59.910 05:56:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.910 05:56:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:59.910 05:56:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.910 05:56:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.910 05:56:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.910 05:56:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:59.910 05:56:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.910 05:56:19 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:59.910 05:56:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:59.910 05:56:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:59.910 05:56:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.910 05:56:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:00.169 /dev/nbd0 00:06:00.169 05:56:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:00.169 05:56:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:00.169 05:56:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:00.169 05:56:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:00.169 05:56:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.169 05:56:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.169 05:56:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:00.169 05:56:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:00.169 05:56:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.169 05:56:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.169 05:56:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.169 1+0 records in 00:06:00.169 1+0 records out 00:06:00.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022299 s, 18.4 MB/s 00:06:00.169 05:56:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.169 05:56:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:00.169 05:56:20 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.169 05:56:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.169 05:56:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:00.169 05:56:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.169 05:56:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.169 05:56:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:00.428 /dev/nbd1 00:06:00.428 05:56:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.428 05:56:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.428 05:56:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:00.428 05:56:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:00.428 05:56:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.428 05:56:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.428 05:56:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:00.428 05:56:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:00.428 05:56:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.428 05:56:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.428 05:56:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.428 1+0 records in 00:06:00.428 1+0 records out 00:06:00.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207302 s, 19.8 MB/s 00:06:00.428 05:56:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.428 05:56:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:00.428 05:56:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.428 05:56:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.428 05:56:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:00.428 05:56:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.428 05:56:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.428 05:56:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.428 05:56:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.428 05:56:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.687 05:56:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:00.687 { 00:06:00.687 "nbd_device": "/dev/nbd0", 00:06:00.687 "bdev_name": "Malloc0" 00:06:00.687 }, 00:06:00.687 { 00:06:00.687 "nbd_device": "/dev/nbd1", 00:06:00.687 "bdev_name": "Malloc1" 00:06:00.687 } 00:06:00.687 ]' 00:06:00.687 05:56:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:00.687 { 00:06:00.687 "nbd_device": "/dev/nbd0", 00:06:00.687 "bdev_name": "Malloc0" 00:06:00.687 }, 00:06:00.687 { 00:06:00.687 "nbd_device": "/dev/nbd1", 00:06:00.687 "bdev_name": "Malloc1" 00:06:00.687 } 00:06:00.687 ]' 00:06:00.687 05:56:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:00.688 /dev/nbd1' 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:00.688 /dev/nbd1' 00:06:00.688 
05:56:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:00.688 256+0 records in 00:06:00.688 256+0 records out 00:06:00.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107187 s, 97.8 MB/s 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:00.688 256+0 records in 00:06:00.688 256+0 records out 00:06:00.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138536 s, 75.7 MB/s 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:00.688 256+0 records in 00:06:00.688 256+0 records out 00:06:00.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148103 s, 70.8 MB/s 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.688 05:56:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:00.947 05:56:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:00.947 05:56:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:00.947 05:56:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:00.947 05:56:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.947 05:56:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.947 05:56:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:00.947 05:56:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.947 05:56:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.947 05:56:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.947 05:56:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:01.205 05:56:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:01.205 05:56:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:01.205 05:56:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:01.205 05:56:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.205 05:56:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.205 05:56:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:01.205 05:56:21 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:01.205 05:56:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.205 05:56:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.205 05:56:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.205 05:56:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.464 05:56:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:01.464 05:56:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:01.464 05:56:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.464 05:56:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:01.464 05:56:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:01.464 05:56:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.464 05:56:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:01.464 05:56:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:01.464 05:56:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:01.464 05:56:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:01.464 05:56:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:01.464 05:56:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:01.464 05:56:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:01.723 05:56:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:01.723 [2024-12-15 05:56:21.815786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.723 [2024-12-15 05:56:21.835762] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.723 [2024-12-15 05:56:21.835764] 
reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.981 [2024-12-15 05:56:21.876398] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:01.981 [2024-12-15 05:56:21.876434] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:05.267 05:56:24 event.app_repeat -- event/event.sh@38 -- # waitforlisten 781144 /var/tmp/spdk-nbd.sock 00:06:05.267 05:56:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 781144 ']' 00:06:05.267 05:56:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.267 05:56:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.267 05:56:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:05.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:05.268 05:56:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.268 05:56:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:05.268 05:56:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.268 05:56:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:05.268 05:56:24 event.app_repeat -- event/event.sh@39 -- # killprocess 781144 00:06:05.268 05:56:24 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 781144 ']' 00:06:05.268 05:56:24 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 781144 00:06:05.268 05:56:24 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:05.268 05:56:24 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.268 05:56:24 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 781144 00:06:05.268 05:56:24 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.268 05:56:24 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.268 05:56:24 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 781144' 00:06:05.268 killing process with pid 781144 00:06:05.268 05:56:24 event.app_repeat -- common/autotest_common.sh@973 -- # kill 781144 00:06:05.268 05:56:24 event.app_repeat -- common/autotest_common.sh@978 -- # wait 781144 00:06:05.268 spdk_app_start is called in Round 0. 00:06:05.268 Shutdown signal received, stop current app iteration 00:06:05.268 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization... 00:06:05.268 spdk_app_start is called in Round 1. 00:06:05.268 Shutdown signal received, stop current app iteration 00:06:05.268 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization... 00:06:05.268 spdk_app_start is called in Round 2. 
00:06:05.268 Shutdown signal received, stop current app iteration 00:06:05.268 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization... 00:06:05.268 spdk_app_start is called in Round 3. 00:06:05.268 Shutdown signal received, stop current app iteration 00:06:05.268 05:56:25 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:05.268 05:56:25 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:05.268 00:06:05.268 real 0m16.365s 00:06:05.268 user 0m36.060s 00:06:05.268 sys 0m2.525s 00:06:05.268 05:56:25 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.268 05:56:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:05.268 ************************************ 00:06:05.268 END TEST app_repeat 00:06:05.268 ************************************ 00:06:05.268 05:56:25 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:05.268 05:56:25 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:05.268 05:56:25 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.268 05:56:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.268 05:56:25 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.268 ************************************ 00:06:05.268 START TEST cpu_locks 00:06:05.268 ************************************ 00:06:05.268 05:56:25 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:05.268 * Looking for test storage... 
00:06:05.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:05.268 05:56:25 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:05.268 05:56:25 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:05.268 05:56:25 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:05.268 05:56:25 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:05.268 05:56:25 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.268 05:56:25 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.268 05:56:25 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.268 05:56:25 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.268 05:56:25 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.268 05:56:25 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.268 05:56:25 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.268 05:56:25 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.268 05:56:25 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.268 05:56:25 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.268 05:56:25 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.268 05:56:25 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:05.268 05:56:25 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:05.268 05:56:25 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.268 05:56:25 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:05.268 05:56:25 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:05.268 05:56:25 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:05.268 05:56:25 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.268 05:56:25 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:05.268 05:56:25 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.268 05:56:25 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:05.268 05:56:25 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:05.268 05:56:25 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.268 05:56:25 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:05.268 05:56:25 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.268 05:56:25 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.268 05:56:25 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.268 05:56:25 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:05.268 05:56:25 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.268 05:56:25 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:05.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.268 --rc genhtml_branch_coverage=1 00:06:05.268 --rc genhtml_function_coverage=1 00:06:05.268 --rc genhtml_legend=1 00:06:05.268 --rc geninfo_all_blocks=1 00:06:05.268 --rc geninfo_unexecuted_blocks=1 00:06:05.268 00:06:05.268 ' 00:06:05.268 05:56:25 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:05.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.268 --rc genhtml_branch_coverage=1 00:06:05.268 --rc genhtml_function_coverage=1 00:06:05.268 --rc genhtml_legend=1 00:06:05.268 --rc geninfo_all_blocks=1 00:06:05.268 --rc geninfo_unexecuted_blocks=1 
00:06:05.268 00:06:05.268 ' 00:06:05.268 05:56:25 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:05.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.268 --rc genhtml_branch_coverage=1 00:06:05.268 --rc genhtml_function_coverage=1 00:06:05.268 --rc genhtml_legend=1 00:06:05.268 --rc geninfo_all_blocks=1 00:06:05.268 --rc geninfo_unexecuted_blocks=1 00:06:05.268 00:06:05.268 ' 00:06:05.268 05:56:25 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:05.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.268 --rc genhtml_branch_coverage=1 00:06:05.268 --rc genhtml_function_coverage=1 00:06:05.268 --rc genhtml_legend=1 00:06:05.268 --rc geninfo_all_blocks=1 00:06:05.268 --rc geninfo_unexecuted_blocks=1 00:06:05.268 00:06:05.268 ' 00:06:05.268 05:56:25 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:05.268 05:56:25 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:05.268 05:56:25 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:05.268 05:56:25 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:05.268 05:56:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.268 05:56:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.268 05:56:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.268 ************************************ 00:06:05.268 START TEST default_locks 00:06:05.268 ************************************ 00:06:05.268 05:56:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:05.268 05:56:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=784065 00:06:05.268 05:56:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 784065 00:06:05.268 05:56:25 event.cpu_locks.default_locks 
-- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:05.268 05:56:25 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 784065 ']' 00:06:05.268 05:56:25 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.268 05:56:25 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.268 05:56:25 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.268 05:56:25 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.268 05:56:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.268 [2024-12-15 05:56:25.403405] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:05.268 [2024-12-15 05:56:25.403450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784065 ] 00:06:05.527 [2024-12-15 05:56:25.478801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.527 [2024-12-15 05:56:25.501101] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.786 05:56:25 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.786 05:56:25 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:05.786 05:56:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 784065 00:06:05.786 05:56:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 784065 00:06:05.786 05:56:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.786 lslocks: write error 00:06:05.786 05:56:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 784065 00:06:05.786 05:56:25 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 784065 ']' 00:06:05.786 05:56:25 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 784065 00:06:05.786 05:56:25 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:05.786 05:56:25 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.786 05:56:25 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 784065 00:06:06.045 05:56:25 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.045 05:56:25 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.045 05:56:25 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 784065' 00:06:06.045 killing process with pid 784065 00:06:06.045 05:56:25 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 784065 00:06:06.045 05:56:25 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 784065 00:06:06.304 05:56:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 784065 00:06:06.304 05:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:06.304 05:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 784065 00:06:06.304 05:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:06.304 05:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.304 05:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:06.304 05:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.304 05:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 784065 00:06:06.304 05:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 784065 ']' 00:06:06.304 05:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.304 05:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.304 05:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:06.304 05:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.304 05:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (784065) - No such process 00:06:06.304 ERROR: process (pid: 784065) is no longer running 00:06:06.304 05:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.304 05:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:06.304 05:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:06.304 05:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:06.304 05:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:06.304 05:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:06.304 05:56:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:06.304 05:56:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:06.304 05:56:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:06.304 05:56:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:06.304 00:06:06.304 real 0m0.897s 00:06:06.304 user 0m0.845s 00:06:06.304 sys 0m0.426s 00:06:06.304 05:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.304 05:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.304 ************************************ 00:06:06.304 END TEST default_locks 00:06:06.304 ************************************ 00:06:06.304 05:56:26 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:06.304 05:56:26 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.304 05:56:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.304 05:56:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.304 ************************************ 00:06:06.304 START TEST default_locks_via_rpc 00:06:06.304 ************************************ 00:06:06.304 05:56:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:06.304 05:56:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=784317 00:06:06.304 05:56:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 784317 00:06:06.304 05:56:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.304 05:56:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 784317 ']' 00:06:06.304 05:56:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.304 05:56:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.304 05:56:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.304 05:56:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.304 05:56:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.304 [2024-12-15 05:56:26.373732] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:06.304 [2024-12-15 05:56:26.373776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784317 ] 00:06:06.563 [2024-12-15 05:56:26.445296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.563 [2024-12-15 05:56:26.465178] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.563 05:56:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.563 05:56:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:06.563 05:56:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:06.563 05:56:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.563 05:56:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.564 05:56:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.564 05:56:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:06.564 05:56:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:06.564 05:56:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:06.564 05:56:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:06.564 05:56:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:06.564 05:56:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.564 05:56:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.564 05:56:26 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.564 05:56:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 784317 00:06:06.564 05:56:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 784317 00:06:06.564 05:56:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.131 05:56:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 784317 00:06:07.131 05:56:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 784317 ']' 00:06:07.131 05:56:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 784317 00:06:07.131 05:56:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:07.131 05:56:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.131 05:56:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 784317 00:06:07.131 05:56:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.131 05:56:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.131 05:56:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 784317' 00:06:07.131 killing process with pid 784317 00:06:07.131 05:56:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 784317 00:06:07.131 05:56:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 784317 00:06:07.390 00:06:07.390 real 0m1.009s 00:06:07.390 user 0m0.971s 00:06:07.390 sys 0m0.469s 00:06:07.390 05:56:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.390 05:56:27 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.390 ************************************ 00:06:07.390 END TEST default_locks_via_rpc 00:06:07.390 ************************************ 00:06:07.390 05:56:27 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:07.390 05:56:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.390 05:56:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.390 05:56:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.390 ************************************ 00:06:07.390 START TEST non_locking_app_on_locked_coremask 00:06:07.390 ************************************ 00:06:07.391 05:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:07.391 05:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=784565 00:06:07.391 05:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 784565 /var/tmp/spdk.sock 00:06:07.391 05:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.391 05:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 784565 ']' 00:06:07.391 05:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.391 05:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.391 05:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:07.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.391 05:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.391 05:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.391 [2024-12-15 05:56:27.455236] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:07.391 [2024-12-15 05:56:27.455287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784565 ] 00:06:07.649 [2024-12-15 05:56:27.529174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.649 [2024-12-15 05:56:27.551890] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.649 05:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.649 05:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:07.649 05:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=784572 00:06:07.649 05:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 784572 /var/tmp/spdk2.sock 00:06:07.649 05:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:07.649 05:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 784572 ']' 00:06:07.649 05:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:06:07.649 05:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.649 05:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.649 05:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.649 05:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.908 [2024-12-15 05:56:27.800126] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:07.908 [2024-12-15 05:56:27.800176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784572 ] 00:06:07.908 [2024-12-15 05:56:27.889050] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:07.908 [2024-12-15 05:56:27.889076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.908 [2024-12-15 05:56:27.932628] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.847 05:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.847 05:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:08.847 05:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 784565 00:06:08.847 05:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 784565 00:06:08.847 05:56:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.414 lslocks: write error 00:06:09.414 05:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 784565 00:06:09.414 05:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 784565 ']' 00:06:09.414 05:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 784565 00:06:09.414 05:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:09.414 05:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.414 05:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 784565 00:06:09.414 05:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.414 05:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.414 05:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 784565' 00:06:09.414 killing process with pid 784565 00:06:09.414 05:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 784565 00:06:09.414 05:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 784565 00:06:09.982 05:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 784572 00:06:09.982 05:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 784572 ']' 00:06:09.982 05:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 784572 00:06:09.982 05:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:09.982 05:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.982 05:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 784572 00:06:09.982 05:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.982 05:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.982 05:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 784572' 00:06:09.982 killing process with pid 784572 00:06:09.982 05:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 784572 00:06:09.982 05:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 784572 00:06:10.240 00:06:10.240 real 0m2.823s 00:06:10.240 user 0m2.988s 00:06:10.240 sys 0m0.967s 00:06:10.240 05:56:30 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.240 05:56:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.240 ************************************ 00:06:10.240 END TEST non_locking_app_on_locked_coremask 00:06:10.240 ************************************ 00:06:10.240 05:56:30 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:10.240 05:56:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.240 05:56:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.240 05:56:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.240 ************************************ 00:06:10.240 START TEST locking_app_on_unlocked_coremask 00:06:10.240 ************************************ 00:06:10.240 05:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:10.240 05:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=785058 00:06:10.240 05:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 785058 /var/tmp/spdk.sock 00:06:10.240 05:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:10.240 05:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 785058 ']' 00:06:10.240 05:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.240 05:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.240 05:56:30 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.240 05:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.240 05:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.240 [2024-12-15 05:56:30.344749] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:10.240 [2024-12-15 05:56:30.344799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785058 ] 00:06:10.499 [2024-12-15 05:56:30.417199] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:10.499 [2024-12-15 05:56:30.417223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.499 [2024-12-15 05:56:30.437466] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.757 05:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.757 05:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:10.757 05:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=785061 00:06:10.757 05:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 785061 /var/tmp/spdk2.sock 00:06:10.757 05:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:10.757 05:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 785061 ']' 00:06:10.757 05:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.757 05:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.757 05:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.757 05:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.757 05:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.757 [2024-12-15 05:56:30.696779] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:10.757 [2024-12-15 05:56:30.696825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785061 ] 00:06:10.757 [2024-12-15 05:56:30.783643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.757 [2024-12-15 05:56:30.829780] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.692 05:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.693 05:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:11.693 05:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 785061 00:06:11.693 05:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 785061 00:06:11.693 05:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:12.629 lslocks: write error 00:06:12.629 05:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 785058 00:06:12.629 05:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 785058 ']' 00:06:12.629 05:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 785058 00:06:12.629 05:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:12.629 05:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.629 05:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 785058 00:06:12.629 05:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.629 05:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.629 05:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 785058' 00:06:12.629 killing process with pid 785058 00:06:12.629 05:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 785058 00:06:12.629 05:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 785058 00:06:13.197 05:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 785061 00:06:13.197 05:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 785061 ']' 00:06:13.197 05:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 785061 00:06:13.197 05:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:13.197 05:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.197 05:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 785061 00:06:13.197 05:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.197 05:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.197 05:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 785061' 00:06:13.197 killing process with pid 785061 00:06:13.197 05:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 785061 00:06:13.197 05:56:33 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 785061 00:06:13.457 00:06:13.457 real 0m3.206s 00:06:13.457 user 0m3.364s 00:06:13.457 sys 0m1.108s 00:06:13.457 05:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.457 05:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.457 ************************************ 00:06:13.457 END TEST locking_app_on_unlocked_coremask 00:06:13.457 ************************************ 00:06:13.457 05:56:33 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:13.457 05:56:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.457 05:56:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.457 05:56:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.457 ************************************ 00:06:13.457 START TEST locking_app_on_locked_coremask 00:06:13.457 ************************************ 00:06:13.457 05:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:13.457 05:56:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=785550 00:06:13.457 05:56:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 785550 /var/tmp/spdk.sock 00:06:13.457 05:56:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.457 05:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 785550 ']' 00:06:13.457 05:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:06:13.457 05:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.457 05:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.457 05:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.457 05:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.716 [2024-12-15 05:56:33.619080] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:13.716 [2024-12-15 05:56:33.619124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785550 ] 00:06:13.716 [2024-12-15 05:56:33.691442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.716 [2024-12-15 05:56:33.713173] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.975 05:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.975 05:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:13.975 05:56:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=785673 00:06:13.975 05:56:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 785673 /var/tmp/spdk2.sock 00:06:13.975 05:56:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:06:13.975 05:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:13.975 05:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 785673 /var/tmp/spdk2.sock 00:06:13.975 05:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:13.975 05:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.975 05:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:13.975 05:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.975 05:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 785673 /var/tmp/spdk2.sock 00:06:13.975 05:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 785673 ']' 00:06:13.975 05:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.975 05:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.975 05:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:13.975 05:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.975 05:56:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.975 [2024-12-15 05:56:33.977910] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:13.975 [2024-12-15 05:56:33.977961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785673 ] 00:06:13.975 [2024-12-15 05:56:34.063730] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 785550 has claimed it. 00:06:13.975 [2024-12-15 05:56:34.063773] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:14.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (785673) - No such process 00:06:14.542 ERROR: process (pid: 785673) is no longer running 00:06:14.542 05:56:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.542 05:56:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:14.542 05:56:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:14.542 05:56:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:14.542 05:56:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:14.542 05:56:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:14.542 05:56:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 785550 00:06:14.542 05:56:34 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 785550 00:06:14.542 05:56:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.111 lslocks: write error 00:06:15.111 05:56:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 785550 00:06:15.111 05:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 785550 ']' 00:06:15.111 05:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 785550 00:06:15.111 05:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:15.111 05:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.111 05:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 785550 00:06:15.111 05:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.111 05:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.111 05:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 785550' 00:06:15.111 killing process with pid 785550 00:06:15.111 05:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 785550 00:06:15.111 05:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 785550 00:06:15.370 00:06:15.370 real 0m1.915s 00:06:15.370 user 0m2.056s 00:06:15.370 sys 0m0.655s 00:06:15.370 05:56:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.370 05:56:35 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:15.370 ************************************ 00:06:15.370 END TEST locking_app_on_locked_coremask 00:06:15.370 ************************************ 00:06:15.628 05:56:35 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:15.628 05:56:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.628 05:56:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.628 05:56:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.628 ************************************ 00:06:15.628 START TEST locking_overlapped_coremask 00:06:15.628 ************************************ 00:06:15.628 05:56:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:15.628 05:56:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=786019 00:06:15.628 05:56:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 786019 /var/tmp/spdk.sock 00:06:15.629 05:56:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:15.629 05:56:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 786019 ']' 00:06:15.629 05:56:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.629 05:56:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.629 05:56:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:15.629 05:56:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.629 05:56:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.629 [2024-12-15 05:56:35.604723] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:15.629 [2024-12-15 05:56:35.604762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786019 ] 00:06:15.629 [2024-12-15 05:56:35.677966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:15.629 [2024-12-15 05:56:35.703000] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.629 [2024-12-15 05:56:35.703105] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.629 [2024-12-15 05:56:35.703105] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.887 05:56:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.887 05:56:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:15.887 05:56:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=786029 00:06:15.887 05:56:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 786029 /var/tmp/spdk2.sock 00:06:15.887 05:56:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:15.887 05:56:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:15.887 05:56:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 786029 /var/tmp/spdk2.sock 00:06:15.887 05:56:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:15.887 05:56:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.887 05:56:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:15.887 05:56:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.887 05:56:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 786029 /var/tmp/spdk2.sock 00:06:15.887 05:56:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 786029 ']' 00:06:15.887 05:56:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.887 05:56:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.887 05:56:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.887 05:56:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.887 05:56:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.887 [2024-12-15 05:56:35.952555] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:15.887 [2024-12-15 05:56:35.952603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786029 ] 00:06:16.146 [2024-12-15 05:56:36.042716] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 786019 has claimed it. 00:06:16.146 [2024-12-15 05:56:36.042754] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:16.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (786029) - No such process 00:06:16.713 ERROR: process (pid: 786029) is no longer running 00:06:16.713 05:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.713 05:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:16.713 05:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:16.713 05:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:16.713 05:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:16.713 05:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:16.713 05:56:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:16.713 05:56:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:16.713 05:56:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:16.713 05:56:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:16.713 05:56:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 786019 00:06:16.713 05:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 786019 ']' 00:06:16.713 05:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 786019 00:06:16.713 05:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:16.713 05:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.713 05:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 786019 00:06:16.713 05:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.713 05:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.713 05:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 786019' 00:06:16.713 killing process with pid 786019 00:06:16.713 05:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 786019 00:06:16.713 05:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 786019 00:06:16.972 00:06:16.972 real 0m1.394s 00:06:16.972 user 0m3.895s 00:06:16.972 sys 0m0.393s 00:06:16.972 05:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.972 05:56:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.972 ************************************ 
00:06:16.972 END TEST locking_overlapped_coremask 00:06:16.972 ************************************ 00:06:16.972 05:56:36 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:16.972 05:56:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.972 05:56:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.972 05:56:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.972 ************************************ 00:06:16.972 START TEST locking_overlapped_coremask_via_rpc 00:06:16.972 ************************************ 00:06:16.972 05:56:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:16.972 05:56:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=786275 00:06:16.972 05:56:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 786275 /var/tmp/spdk.sock 00:06:16.972 05:56:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:16.972 05:56:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 786275 ']' 00:06:16.972 05:56:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.972 05:56:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.972 05:56:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:16.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.972 05:56:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.972 05:56:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.972 [2024-12-15 05:56:37.069835] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:16.972 [2024-12-15 05:56:37.069886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786275 ] 00:06:17.231 [2024-12-15 05:56:37.142582] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:17.231 [2024-12-15 05:56:37.142606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:17.231 [2024-12-15 05:56:37.168010] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.231 [2024-12-15 05:56:37.168047] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.231 [2024-12-15 05:56:37.168048] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.490 05:56:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.490 05:56:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:17.490 05:56:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=786291 00:06:17.490 05:56:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:17.490 05:56:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # 
waitforlisten 786291 /var/tmp/spdk2.sock 00:06:17.490 05:56:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 786291 ']' 00:06:17.490 05:56:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.490 05:56:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.490 05:56:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.490 05:56:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.490 05:56:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.490 [2024-12-15 05:56:37.420153] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:17.490 [2024-12-15 05:56:37.420198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786291 ] 00:06:17.490 [2024-12-15 05:56:37.511364] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:17.490 [2024-12-15 05:56:37.511391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:17.490 [2024-12-15 05:56:37.560050] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.490 [2024-12-15 05:56:37.560168] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.490 [2024-12-15 05:56:37.560169] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.424 05:56:38 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.424 [2024-12-15 05:56:38.278062] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 786275 has claimed it. 00:06:18.424 request: 00:06:18.424 { 00:06:18.424 "method": "framework_enable_cpumask_locks", 00:06:18.424 "req_id": 1 00:06:18.424 } 00:06:18.424 Got JSON-RPC error response 00:06:18.424 response: 00:06:18.424 { 00:06:18.424 "code": -32603, 00:06:18.424 "message": "Failed to claim CPU core: 2" 00:06:18.424 } 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 786275 /var/tmp/spdk.sock 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- 
# '[' -z 786275 ']' 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 786291 /var/tmp/spdk2.sock 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 786291 ']' 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.424 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.682 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.682 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:18.682 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:18.682 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:18.683 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:18.683 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:18.683 00:06:18.683 real 0m1.667s 00:06:18.683 user 0m0.811s 00:06:18.683 sys 0m0.139s 00:06:18.683 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.683 05:56:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.683 ************************************ 00:06:18.683 END TEST locking_overlapped_coremask_via_rpc 00:06:18.683 ************************************ 00:06:18.683 05:56:38 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:18.683 05:56:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 786275 ]] 00:06:18.683 05:56:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 786275 00:06:18.683 05:56:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 786275 ']' 00:06:18.683 05:56:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 786275 00:06:18.683 05:56:38 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:18.683 05:56:38 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.683 05:56:38 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 786275 00:06:18.683 05:56:38 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.683 05:56:38 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.683 05:56:38 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 786275' 00:06:18.683 killing process with pid 786275 00:06:18.683 05:56:38 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 786275 00:06:18.683 05:56:38 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 786275 00:06:18.940 05:56:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 786291 ]] 00:06:18.940 05:56:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 786291 00:06:18.940 05:56:39 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 786291 ']' 00:06:18.940 05:56:39 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 786291 00:06:18.940 05:56:39 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:19.198 05:56:39 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.198 05:56:39 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 786291 00:06:19.198 05:56:39 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:19.198 05:56:39 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:19.198 05:56:39 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 786291' 00:06:19.198 
killing process with pid 786291 00:06:19.198 05:56:39 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 786291 00:06:19.198 05:56:39 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 786291 00:06:19.457 05:56:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:19.457 05:56:39 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:19.457 05:56:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 786275 ]] 00:06:19.457 05:56:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 786275 00:06:19.457 05:56:39 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 786275 ']' 00:06:19.457 05:56:39 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 786275 00:06:19.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (786275) - No such process 00:06:19.457 05:56:39 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 786275 is not found' 00:06:19.457 Process with pid 786275 is not found 00:06:19.457 05:56:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 786291 ]] 00:06:19.457 05:56:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 786291 00:06:19.457 05:56:39 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 786291 ']' 00:06:19.457 05:56:39 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 786291 00:06:19.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (786291) - No such process 00:06:19.457 05:56:39 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 786291 is not found' 00:06:19.457 Process with pid 786291 is not found 00:06:19.457 05:56:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:19.457 00:06:19.457 real 0m14.295s 00:06:19.457 user 0m24.596s 00:06:19.457 sys 0m5.122s 00:06:19.457 05:56:39 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.457 05:56:39 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:06:19.457 ************************************ 00:06:19.457 END TEST cpu_locks 00:06:19.457 ************************************ 00:06:19.457 00:06:19.457 real 0m38.494s 00:06:19.457 user 1m12.890s 00:06:19.457 sys 0m8.594s 00:06:19.457 05:56:39 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.457 05:56:39 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.458 ************************************ 00:06:19.458 END TEST event 00:06:19.458 ************************************ 00:06:19.458 05:56:39 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:19.458 05:56:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.458 05:56:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.458 05:56:39 -- common/autotest_common.sh@10 -- # set +x 00:06:19.458 ************************************ 00:06:19.458 START TEST thread 00:06:19.458 ************************************ 00:06:19.458 05:56:39 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:19.717 * Looking for test storage... 
00:06:19.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:19.717 05:56:39 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:19.717 05:56:39 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:19.717 05:56:39 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:19.717 05:56:39 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:19.717 05:56:39 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.717 05:56:39 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.717 05:56:39 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.717 05:56:39 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.717 05:56:39 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.717 05:56:39 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.717 05:56:39 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.717 05:56:39 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.717 05:56:39 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.717 05:56:39 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.717 05:56:39 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.717 05:56:39 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:19.717 05:56:39 thread -- scripts/common.sh@345 -- # : 1 00:06:19.717 05:56:39 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.717 05:56:39 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.717 05:56:39 thread -- scripts/common.sh@365 -- # decimal 1 00:06:19.717 05:56:39 thread -- scripts/common.sh@353 -- # local d=1 00:06:19.717 05:56:39 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.717 05:56:39 thread -- scripts/common.sh@355 -- # echo 1 00:06:19.717 05:56:39 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.717 05:56:39 thread -- scripts/common.sh@366 -- # decimal 2 00:06:19.717 05:56:39 thread -- scripts/common.sh@353 -- # local d=2 00:06:19.717 05:56:39 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.717 05:56:39 thread -- scripts/common.sh@355 -- # echo 2 00:06:19.717 05:56:39 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.717 05:56:39 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.717 05:56:39 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.717 05:56:39 thread -- scripts/common.sh@368 -- # return 0 00:06:19.717 05:56:39 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.717 05:56:39 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:19.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.717 --rc genhtml_branch_coverage=1 00:06:19.717 --rc genhtml_function_coverage=1 00:06:19.717 --rc genhtml_legend=1 00:06:19.717 --rc geninfo_all_blocks=1 00:06:19.717 --rc geninfo_unexecuted_blocks=1 00:06:19.717 00:06:19.717 ' 00:06:19.717 05:56:39 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:19.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.717 --rc genhtml_branch_coverage=1 00:06:19.717 --rc genhtml_function_coverage=1 00:06:19.717 --rc genhtml_legend=1 00:06:19.717 --rc geninfo_all_blocks=1 00:06:19.717 --rc geninfo_unexecuted_blocks=1 00:06:19.717 00:06:19.717 ' 00:06:19.717 05:56:39 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:19.717 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.717 --rc genhtml_branch_coverage=1 00:06:19.717 --rc genhtml_function_coverage=1 00:06:19.717 --rc genhtml_legend=1 00:06:19.717 --rc geninfo_all_blocks=1 00:06:19.717 --rc geninfo_unexecuted_blocks=1 00:06:19.717 00:06:19.717 ' 00:06:19.717 05:56:39 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:19.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.717 --rc genhtml_branch_coverage=1 00:06:19.717 --rc genhtml_function_coverage=1 00:06:19.717 --rc genhtml_legend=1 00:06:19.717 --rc geninfo_all_blocks=1 00:06:19.717 --rc geninfo_unexecuted_blocks=1 00:06:19.717 00:06:19.717 ' 00:06:19.717 05:56:39 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:19.717 05:56:39 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:19.717 05:56:39 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.717 05:56:39 thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.717 ************************************ 00:06:19.717 START TEST thread_poller_perf 00:06:19.717 ************************************ 00:06:19.717 05:56:39 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:19.717 [2024-12-15 05:56:39.765679] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:19.717 [2024-12-15 05:56:39.765738] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786838 ] 00:06:19.717 [2024-12-15 05:56:39.841806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.976 [2024-12-15 05:56:39.864258] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.976 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:20.913 [2024-12-15T04:56:41.053Z] ====================================== 00:06:20.913 [2024-12-15T04:56:41.053Z] busy:2105766818 (cyc) 00:06:20.913 [2024-12-15T04:56:41.053Z] total_run_count: 407000 00:06:20.913 [2024-12-15T04:56:41.053Z] tsc_hz: 2100000000 (cyc) 00:06:20.913 [2024-12-15T04:56:41.053Z] ====================================== 00:06:20.913 [2024-12-15T04:56:41.053Z] poller_cost: 5173 (cyc), 2463 (nsec) 00:06:20.913 00:06:20.913 real 0m1.162s 00:06:20.913 user 0m1.077s 00:06:20.913 sys 0m0.081s 00:06:20.913 05:56:40 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.913 05:56:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:20.913 ************************************ 00:06:20.913 END TEST thread_poller_perf 00:06:20.913 ************************************ 00:06:20.913 05:56:40 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:20.913 05:56:40 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:20.913 05:56:40 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.913 05:56:40 thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.913 ************************************ 00:06:20.913 START TEST thread_poller_perf 00:06:20.913 
************************************ 00:06:20.913 05:56:40 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:20.913 [2024-12-15 05:56:40.996036] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:20.913 [2024-12-15 05:56:40.996105] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787083 ] 00:06:21.172 [2024-12-15 05:56:41.073150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.172 [2024-12-15 05:56:41.095468] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.172 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:22.107 [2024-12-15T04:56:42.247Z] ====================================== 00:06:22.107 [2024-12-15T04:56:42.247Z] busy:2101580766 (cyc) 00:06:22.107 [2024-12-15T04:56:42.247Z] total_run_count: 5125000 00:06:22.107 [2024-12-15T04:56:42.247Z] tsc_hz: 2100000000 (cyc) 00:06:22.107 [2024-12-15T04:56:42.247Z] ====================================== 00:06:22.107 [2024-12-15T04:56:42.247Z] poller_cost: 410 (cyc), 195 (nsec) 00:06:22.107 00:06:22.107 real 0m1.155s 00:06:22.107 user 0m1.080s 00:06:22.107 sys 0m0.071s 00:06:22.107 05:56:42 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.107 05:56:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:22.107 ************************************ 00:06:22.107 END TEST thread_poller_perf 00:06:22.107 ************************************ 00:06:22.107 05:56:42 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:22.107 00:06:22.107 real 0m2.626s 00:06:22.107 user 0m2.321s 00:06:22.107 sys 0m0.319s 00:06:22.107 05:56:42 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.107 05:56:42 thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.107 ************************************ 00:06:22.108 END TEST thread 00:06:22.108 ************************************ 00:06:22.108 05:56:42 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:22.108 05:56:42 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:22.108 05:56:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.108 05:56:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.108 05:56:42 -- common/autotest_common.sh@10 -- # set +x 00:06:22.108 ************************************ 00:06:22.108 START TEST app_cmdline 00:06:22.108 ************************************ 00:06:22.108 05:56:42 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:22.367 * Looking for test storage... 00:06:22.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:22.367 05:56:42 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:22.367 05:56:42 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:22.367 05:56:42 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:22.367 05:56:42 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:22.367 05:56:42 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.367 05:56:42 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.367 05:56:42 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.367 05:56:42 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.367 05:56:42 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.367 05:56:42 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.367 05:56:42 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:06:22.367 05:56:42 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.367 05:56:42 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.367 05:56:42 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.367 05:56:42 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.367 05:56:42 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:22.367 05:56:42 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:22.367 05:56:42 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.367 05:56:42 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:22.367 05:56:42 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:22.367 05:56:42 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:22.367 05:56:42 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.367 05:56:42 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:22.367 05:56:42 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.367 05:56:42 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:22.367 05:56:42 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:22.367 05:56:42 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.367 05:56:42 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:22.367 05:56:42 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.367 05:56:42 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.367 05:56:42 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.367 05:56:42 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:22.367 05:56:42 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.367 05:56:42 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:22.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.367 --rc genhtml_branch_coverage=1 
00:06:22.367 --rc genhtml_function_coverage=1 00:06:22.367 --rc genhtml_legend=1 00:06:22.367 --rc geninfo_all_blocks=1 00:06:22.367 --rc geninfo_unexecuted_blocks=1 00:06:22.367 00:06:22.367 ' 00:06:22.367 05:56:42 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:22.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.367 --rc genhtml_branch_coverage=1 00:06:22.367 --rc genhtml_function_coverage=1 00:06:22.367 --rc genhtml_legend=1 00:06:22.367 --rc geninfo_all_blocks=1 00:06:22.367 --rc geninfo_unexecuted_blocks=1 00:06:22.367 00:06:22.367 ' 00:06:22.367 05:56:42 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:22.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.367 --rc genhtml_branch_coverage=1 00:06:22.367 --rc genhtml_function_coverage=1 00:06:22.367 --rc genhtml_legend=1 00:06:22.367 --rc geninfo_all_blocks=1 00:06:22.367 --rc geninfo_unexecuted_blocks=1 00:06:22.367 00:06:22.367 ' 00:06:22.367 05:56:42 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:22.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.367 --rc genhtml_branch_coverage=1 00:06:22.367 --rc genhtml_function_coverage=1 00:06:22.367 --rc genhtml_legend=1 00:06:22.367 --rc geninfo_all_blocks=1 00:06:22.367 --rc geninfo_unexecuted_blocks=1 00:06:22.367 00:06:22.367 ' 00:06:22.367 05:56:42 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:22.367 05:56:42 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=787372 00:06:22.367 05:56:42 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 787372 00:06:22.367 05:56:42 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:22.367 05:56:42 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 787372 ']' 00:06:22.367 05:56:42 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:22.367 05:56:42 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.367 05:56:42 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.367 05:56:42 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.367 05:56:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:22.367 [2024-12-15 05:56:42.471748] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:22.367 [2024-12-15 05:56:42.471794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787372 ] 00:06:22.626 [2024-12-15 05:56:42.546936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.626 [2024-12-15 05:56:42.569570] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.626 05:56:42 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.626 05:56:42 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:22.626 05:56:42 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:22.885 { 00:06:22.885 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:06:22.885 "fields": { 00:06:22.885 "major": 25, 00:06:22.885 "minor": 1, 00:06:22.885 "patch": 0, 00:06:22.885 "suffix": "-pre", 00:06:22.885 "commit": "e01cb43b8" 00:06:22.885 } 00:06:22.885 } 00:06:22.885 05:56:42 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:22.885 05:56:42 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:22.885 05:56:42 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:06:22.885 05:56:42 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:22.885 05:56:42 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:22.885 05:56:42 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:22.885 05:56:42 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.885 05:56:42 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:22.885 05:56:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:22.885 05:56:42 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.885 05:56:42 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:22.885 05:56:42 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:22.885 05:56:42 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:22.885 05:56:42 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:22.885 05:56:42 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:22.885 05:56:42 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.885 05:56:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.885 05:56:42 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.885 05:56:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.885 05:56:42 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.885 05:56:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:06:22.885 05:56:42 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.885 05:56:42 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:22.885 05:56:42 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:23.144 request: 00:06:23.144 { 00:06:23.144 "method": "env_dpdk_get_mem_stats", 00:06:23.144 "req_id": 1 00:06:23.144 } 00:06:23.144 Got JSON-RPC error response 00:06:23.144 response: 00:06:23.144 { 00:06:23.144 "code": -32601, 00:06:23.144 "message": "Method not found" 00:06:23.144 } 00:06:23.144 05:56:43 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:23.144 05:56:43 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:23.144 05:56:43 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:23.144 05:56:43 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:23.144 05:56:43 app_cmdline -- app/cmdline.sh@1 -- # killprocess 787372 00:06:23.144 05:56:43 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 787372 ']' 00:06:23.144 05:56:43 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 787372 00:06:23.144 05:56:43 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:23.144 05:56:43 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.144 05:56:43 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 787372 00:06:23.144 05:56:43 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.144 05:56:43 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.144 05:56:43 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 787372' 00:06:23.144 killing process with pid 787372 00:06:23.144 05:56:43 
app_cmdline -- common/autotest_common.sh@973 -- # kill 787372 00:06:23.144 05:56:43 app_cmdline -- common/autotest_common.sh@978 -- # wait 787372 00:06:23.403 00:06:23.403 real 0m1.293s 00:06:23.403 user 0m1.510s 00:06:23.403 sys 0m0.440s 00:06:23.403 05:56:43 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.403 05:56:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:23.403 ************************************ 00:06:23.403 END TEST app_cmdline 00:06:23.403 ************************************ 00:06:23.662 05:56:43 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:23.662 05:56:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.662 05:56:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.662 05:56:43 -- common/autotest_common.sh@10 -- # set +x 00:06:23.662 ************************************ 00:06:23.662 START TEST version 00:06:23.662 ************************************ 00:06:23.662 05:56:43 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:23.662 * Looking for test storage... 
00:06:23.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:23.662 05:56:43 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:23.662 05:56:43 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:23.662 05:56:43 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:23.662 05:56:43 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:23.662 05:56:43 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.662 05:56:43 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.662 05:56:43 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.662 05:56:43 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.662 05:56:43 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.662 05:56:43 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.662 05:56:43 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.662 05:56:43 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.662 05:56:43 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.662 05:56:43 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.662 05:56:43 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.662 05:56:43 version -- scripts/common.sh@344 -- # case "$op" in 00:06:23.662 05:56:43 version -- scripts/common.sh@345 -- # : 1 00:06:23.662 05:56:43 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.662 05:56:43 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.662 05:56:43 version -- scripts/common.sh@365 -- # decimal 1 00:06:23.662 05:56:43 version -- scripts/common.sh@353 -- # local d=1 00:06:23.662 05:56:43 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.662 05:56:43 version -- scripts/common.sh@355 -- # echo 1 00:06:23.662 05:56:43 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.662 05:56:43 version -- scripts/common.sh@366 -- # decimal 2 00:06:23.662 05:56:43 version -- scripts/common.sh@353 -- # local d=2 00:06:23.662 05:56:43 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.662 05:56:43 version -- scripts/common.sh@355 -- # echo 2 00:06:23.662 05:56:43 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.662 05:56:43 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.662 05:56:43 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.662 05:56:43 version -- scripts/common.sh@368 -- # return 0 00:06:23.662 05:56:43 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.662 05:56:43 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:23.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.662 --rc genhtml_branch_coverage=1 00:06:23.662 --rc genhtml_function_coverage=1 00:06:23.662 --rc genhtml_legend=1 00:06:23.662 --rc geninfo_all_blocks=1 00:06:23.662 --rc geninfo_unexecuted_blocks=1 00:06:23.662 00:06:23.662 ' 00:06:23.662 05:56:43 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:23.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.662 --rc genhtml_branch_coverage=1 00:06:23.662 --rc genhtml_function_coverage=1 00:06:23.662 --rc genhtml_legend=1 00:06:23.662 --rc geninfo_all_blocks=1 00:06:23.662 --rc geninfo_unexecuted_blocks=1 00:06:23.662 00:06:23.662 ' 00:06:23.662 05:56:43 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:23.662 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.662 --rc genhtml_branch_coverage=1 00:06:23.662 --rc genhtml_function_coverage=1 00:06:23.662 --rc genhtml_legend=1 00:06:23.662 --rc geninfo_all_blocks=1 00:06:23.662 --rc geninfo_unexecuted_blocks=1 00:06:23.662 00:06:23.662 ' 00:06:23.662 05:56:43 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:23.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.662 --rc genhtml_branch_coverage=1 00:06:23.662 --rc genhtml_function_coverage=1 00:06:23.662 --rc genhtml_legend=1 00:06:23.662 --rc geninfo_all_blocks=1 00:06:23.662 --rc geninfo_unexecuted_blocks=1 00:06:23.662 00:06:23.662 ' 00:06:23.662 05:56:43 version -- app/version.sh@17 -- # get_header_version major 00:06:23.662 05:56:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.662 05:56:43 version -- app/version.sh@14 -- # cut -f2 00:06:23.662 05:56:43 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.662 05:56:43 version -- app/version.sh@17 -- # major=25 00:06:23.662 05:56:43 version -- app/version.sh@18 -- # get_header_version minor 00:06:23.662 05:56:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.662 05:56:43 version -- app/version.sh@14 -- # cut -f2 00:06:23.662 05:56:43 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.662 05:56:43 version -- app/version.sh@18 -- # minor=1 00:06:23.662 05:56:43 version -- app/version.sh@19 -- # get_header_version patch 00:06:23.662 05:56:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.662 05:56:43 version -- app/version.sh@14 -- # cut -f2 00:06:23.662 05:56:43 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.921 
05:56:43 version -- app/version.sh@19 -- # patch=0 00:06:23.921 05:56:43 version -- app/version.sh@20 -- # get_header_version suffix 00:06:23.921 05:56:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.921 05:56:43 version -- app/version.sh@14 -- # cut -f2 00:06:23.921 05:56:43 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.921 05:56:43 version -- app/version.sh@20 -- # suffix=-pre 00:06:23.921 05:56:43 version -- app/version.sh@22 -- # version=25.1 00:06:23.921 05:56:43 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:23.921 05:56:43 version -- app/version.sh@28 -- # version=25.1rc0 00:06:23.921 05:56:43 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:23.921 05:56:43 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:23.921 05:56:43 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:23.921 05:56:43 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:23.921 00:06:23.921 real 0m0.246s 00:06:23.921 user 0m0.157s 00:06:23.921 sys 0m0.133s 00:06:23.921 05:56:43 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.921 05:56:43 version -- common/autotest_common.sh@10 -- # set +x 00:06:23.921 ************************************ 00:06:23.921 END TEST version 00:06:23.921 ************************************ 00:06:23.921 05:56:43 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:23.921 05:56:43 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:23.921 05:56:43 -- spdk/autotest.sh@194 -- # uname -s 00:06:23.921 05:56:43 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:06:23.921 05:56:43 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:23.921 05:56:43 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:23.921 05:56:43 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:23.921 05:56:43 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:23.921 05:56:43 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:23.921 05:56:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:23.921 05:56:43 -- common/autotest_common.sh@10 -- # set +x 00:06:23.921 05:56:43 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:23.921 05:56:43 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:23.921 05:56:43 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:23.921 05:56:43 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:23.921 05:56:43 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:23.921 05:56:43 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:23.921 05:56:43 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:23.921 05:56:43 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:23.921 05:56:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.921 05:56:43 -- common/autotest_common.sh@10 -- # set +x 00:06:23.921 ************************************ 00:06:23.921 START TEST nvmf_tcp 00:06:23.921 ************************************ 00:06:23.921 05:56:43 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:23.921 * Looking for test storage... 
00:06:23.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:23.921 05:56:44 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:23.921 05:56:44 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:23.921 05:56:44 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:24.180 05:56:44 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:24.180 05:56:44 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.180 05:56:44 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.180 05:56:44 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.180 05:56:44 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.180 05:56:44 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.180 05:56:44 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.180 05:56:44 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.180 05:56:44 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.180 05:56:44 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.180 05:56:44 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.180 05:56:44 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.180 05:56:44 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:24.180 05:56:44 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:24.180 05:56:44 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.180 05:56:44 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.180 05:56:44 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:24.180 05:56:44 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:24.180 05:56:44 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.180 05:56:44 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:24.180 05:56:44 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.180 05:56:44 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:24.180 05:56:44 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:24.180 05:56:44 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.180 05:56:44 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:24.180 05:56:44 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.180 05:56:44 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.180 05:56:44 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.180 05:56:44 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:24.180 05:56:44 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.180 05:56:44 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:24.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.180 --rc genhtml_branch_coverage=1 00:06:24.180 --rc genhtml_function_coverage=1 00:06:24.180 --rc genhtml_legend=1 00:06:24.180 --rc geninfo_all_blocks=1 00:06:24.180 --rc geninfo_unexecuted_blocks=1 00:06:24.180 00:06:24.180 ' 00:06:24.180 05:56:44 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:24.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.180 --rc genhtml_branch_coverage=1 00:06:24.180 --rc genhtml_function_coverage=1 00:06:24.180 --rc genhtml_legend=1 00:06:24.180 --rc geninfo_all_blocks=1 00:06:24.180 --rc geninfo_unexecuted_blocks=1 00:06:24.180 00:06:24.180 ' 00:06:24.180 05:56:44 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:06:24.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.180 --rc genhtml_branch_coverage=1 00:06:24.180 --rc genhtml_function_coverage=1 00:06:24.180 --rc genhtml_legend=1 00:06:24.180 --rc geninfo_all_blocks=1 00:06:24.180 --rc geninfo_unexecuted_blocks=1 00:06:24.180 00:06:24.180 ' 00:06:24.180 05:56:44 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:24.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.180 --rc genhtml_branch_coverage=1 00:06:24.180 --rc genhtml_function_coverage=1 00:06:24.180 --rc genhtml_legend=1 00:06:24.180 --rc geninfo_all_blocks=1 00:06:24.180 --rc geninfo_unexecuted_blocks=1 00:06:24.180 00:06:24.180 ' 00:06:24.180 05:56:44 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:24.180 05:56:44 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:24.180 05:56:44 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:24.180 05:56:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:24.180 05:56:44 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.180 05:56:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:24.180 ************************************ 00:06:24.180 START TEST nvmf_target_core 00:06:24.180 ************************************ 00:06:24.180 05:56:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:24.180 * Looking for test storage... 
00:06:24.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:24.180 05:56:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:24.180 05:56:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:06:24.180 05:56:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:24.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.440 --rc genhtml_branch_coverage=1 00:06:24.440 --rc genhtml_function_coverage=1 00:06:24.440 --rc genhtml_legend=1 00:06:24.440 --rc geninfo_all_blocks=1 00:06:24.440 --rc geninfo_unexecuted_blocks=1 00:06:24.440 00:06:24.440 ' 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:24.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.440 --rc genhtml_branch_coverage=1 
00:06:24.440 --rc genhtml_function_coverage=1 00:06:24.440 --rc genhtml_legend=1 00:06:24.440 --rc geninfo_all_blocks=1 00:06:24.440 --rc geninfo_unexecuted_blocks=1 00:06:24.440 00:06:24.440 ' 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:24.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.440 --rc genhtml_branch_coverage=1 00:06:24.440 --rc genhtml_function_coverage=1 00:06:24.440 --rc genhtml_legend=1 00:06:24.440 --rc geninfo_all_blocks=1 00:06:24.440 --rc geninfo_unexecuted_blocks=1 00:06:24.440 00:06:24.440 ' 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:24.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.440 --rc genhtml_branch_coverage=1 00:06:24.440 --rc genhtml_function_coverage=1 00:06:24.440 --rc genhtml_legend=1 00:06:24.440 --rc geninfo_all_blocks=1 00:06:24.440 --rc geninfo_unexecuted_blocks=1 00:06:24.440 00:06:24.440 ' 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:24.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:24.440 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:24.441 05:56:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:24.441 05:56:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:24.441 05:56:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.441 05:56:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:24.441 ************************************ 00:06:24.441 START TEST nvmf_abort 00:06:24.441 ************************************ 00:06:24.441 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:24.441 * Looking for test storage... 
00:06:24.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:24.441 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:24.441 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:06:24.441 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.700 
05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:24.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.700 --rc genhtml_branch_coverage=1 00:06:24.700 --rc genhtml_function_coverage=1 00:06:24.700 --rc genhtml_legend=1 00:06:24.700 --rc geninfo_all_blocks=1 00:06:24.700 --rc 
geninfo_unexecuted_blocks=1 00:06:24.700 00:06:24.700 ' 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:24.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.700 --rc genhtml_branch_coverage=1 00:06:24.700 --rc genhtml_function_coverage=1 00:06:24.700 --rc genhtml_legend=1 00:06:24.700 --rc geninfo_all_blocks=1 00:06:24.700 --rc geninfo_unexecuted_blocks=1 00:06:24.700 00:06:24.700 ' 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:24.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.700 --rc genhtml_branch_coverage=1 00:06:24.700 --rc genhtml_function_coverage=1 00:06:24.700 --rc genhtml_legend=1 00:06:24.700 --rc geninfo_all_blocks=1 00:06:24.700 --rc geninfo_unexecuted_blocks=1 00:06:24.700 00:06:24.700 ' 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:24.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.700 --rc genhtml_branch_coverage=1 00:06:24.700 --rc genhtml_function_coverage=1 00:06:24.700 --rc genhtml_legend=1 00:06:24.700 --rc geninfo_all_blocks=1 00:06:24.700 --rc geninfo_unexecuted_blocks=1 00:06:24.700 00:06:24.700 ' 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:24.700 05:56:44 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.700 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.701 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.701 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:24.701 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.701 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:24.701 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:24.701 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:24.701 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:24.701 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:24.701 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:24.701 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:24.701 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:24.701 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:24.701 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:24.701 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:24.701 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:24.701 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:24.701 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:24.701 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:24.701 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:24.701 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:24.701 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:24.701 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:24.701 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:24.701 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:24.701 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:24.701 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:24.701 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:06:24.701 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:24.701 05:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:31.269 05:56:50 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:31.269 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:31.269 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:31.269 05:56:50 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:31.269 Found net devices under 0000:af:00.0: cvl_0_0 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:af:00.1: cvl_0_1' 00:06:31.269 Found net devices under 0000:af:00.1: cvl_0_1 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:31.269 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:31.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:31.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:06:31.270 00:06:31.270 --- 10.0.0.2 ping statistics --- 00:06:31.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:31.270 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:31.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:31.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:06:31.270 00:06:31.270 --- 10.0.0.1 ping statistics --- 00:06:31.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:31.270 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=790990 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 790990 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 790990 ']' 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.270 [2024-12-15 05:56:50.673683] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:31.270 [2024-12-15 05:56:50.673732] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:31.270 [2024-12-15 05:56:50.753171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.270 [2024-12-15 05:56:50.777004] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:31.270 [2024-12-15 05:56:50.777040] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:31.270 [2024-12-15 05:56:50.777047] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:31.270 [2024-12-15 05:56:50.777053] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:31.270 [2024-12-15 05:56:50.777058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:31.270 [2024-12-15 05:56:50.778296] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.270 [2024-12-15 05:56:50.778400] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.270 [2024-12-15 05:56:50.778401] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.270 [2024-12-15 05:56:50.910059] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.270 Malloc0 00:06:31.270 05:56:50 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.270 Delay0 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.270 [2024-12-15 05:56:50.983976] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.270 05:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:31.270 [2024-12-15 05:56:51.148133] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:33.168 Initializing NVMe Controllers 00:06:33.168 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:33.168 controller IO queue size 128 less than required 00:06:33.168 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:33.168 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:33.168 Initialization complete. Launching workers. 
00:06:33.168 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37622 00:06:33.168 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37683, failed to submit 62 00:06:33.168 success 37626, unsuccessful 57, failed 0 00:06:33.168 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:33.168 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.168 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:33.426 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.426 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:33.426 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:33.426 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:33.426 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:33.426 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:33.426 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:33.426 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:33.426 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:33.426 rmmod nvme_tcp 00:06:33.426 rmmod nvme_fabrics 00:06:33.426 rmmod nvme_keyring 00:06:33.426 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:33.426 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:33.426 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:33.426 05:56:53 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 790990 ']' 00:06:33.426 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 790990 00:06:33.426 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 790990 ']' 00:06:33.426 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 790990 00:06:33.426 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:33.426 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.426 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 790990 00:06:33.426 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:33.426 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:33.426 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 790990' 00:06:33.426 killing process with pid 790990 00:06:33.426 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 790990 00:06:33.426 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 790990 00:06:33.685 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:33.685 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:33.685 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:33.685 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:33.685 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:33.685 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:06:33.685 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:33.685 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:33.685 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:33.685 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:33.685 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:33.685 05:56:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:35.589 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:35.589 00:06:35.589 real 0m11.263s 00:06:35.589 user 0m11.972s 00:06:35.589 sys 0m5.459s 00:06:35.589 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.589 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:35.589 ************************************ 00:06:35.589 END TEST nvmf_abort 00:06:35.589 ************************************ 00:06:35.589 05:56:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:35.589 05:56:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:35.589 05:56:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.589 05:56:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:35.849 ************************************ 00:06:35.849 START TEST nvmf_ns_hotplug_stress 00:06:35.849 ************************************ 00:06:35.849 05:56:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:35.849 * Looking for test storage... 00:06:35.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.849 
05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.849 05:56:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:35.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.849 --rc genhtml_branch_coverage=1 00:06:35.849 --rc genhtml_function_coverage=1 00:06:35.849 --rc genhtml_legend=1 00:06:35.849 --rc geninfo_all_blocks=1 00:06:35.849 --rc geninfo_unexecuted_blocks=1 00:06:35.849 00:06:35.849 ' 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:35.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.849 --rc genhtml_branch_coverage=1 00:06:35.849 --rc genhtml_function_coverage=1 00:06:35.849 --rc genhtml_legend=1 00:06:35.849 --rc geninfo_all_blocks=1 00:06:35.849 --rc geninfo_unexecuted_blocks=1 00:06:35.849 00:06:35.849 ' 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:35.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.849 --rc genhtml_branch_coverage=1 00:06:35.849 --rc genhtml_function_coverage=1 00:06:35.849 --rc genhtml_legend=1 00:06:35.849 --rc geninfo_all_blocks=1 00:06:35.849 --rc geninfo_unexecuted_blocks=1 00:06:35.849 00:06:35.849 ' 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:35.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.849 --rc genhtml_branch_coverage=1 00:06:35.849 --rc genhtml_function_coverage=1 00:06:35.849 --rc genhtml_legend=1 00:06:35.849 --rc geninfo_all_blocks=1 00:06:35.849 --rc geninfo_unexecuted_blocks=1 00:06:35.849 
00:06:35.849 ' 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.849 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.850 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.850 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:35.850 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.850 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:35.850 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:35.850 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:35.850 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:35.850 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:35.850 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:35.850 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:35.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:35.850 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:35.850 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:35.850 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:35.850 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:35.850 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:35.850 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:35.850 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:35.850 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:35.850 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:35.850 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:35.850 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:35.850 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:35.850 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:35.850 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:35.850 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:36.109 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:36.109 05:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:42.678 05:57:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:42.678 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:42.679 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:42.679 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:42.679 05:57:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:42.679 Found net devices under 0000:af:00.0: cvl_0_0 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:42.679 05:57:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:42.679 Found net devices under 0000:af:00.1: cvl_0_1 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:42.679 05:57:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:42.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:42.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:06:42.679 00:06:42.679 --- 10.0.0.2 ping statistics --- 00:06:42.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.679 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:42.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:42.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:06:42.679 00:06:42.679 --- 10.0.0.1 ping statistics --- 00:06:42.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.679 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=795076 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 795076 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 795076 ']' 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.679 05:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:42.679 [2024-12-15 05:57:02.011713] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:42.680 [2024-12-15 05:57:02.011754] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.680 [2024-12-15 05:57:02.089363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:42.680 [2024-12-15 05:57:02.110437] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:42.680 [2024-12-15 05:57:02.110475] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:42.680 [2024-12-15 05:57:02.110482] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:42.680 [2024-12-15 05:57:02.110488] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:42.680 [2024-12-15 05:57:02.110493] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:42.680 [2024-12-15 05:57:02.111825] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.680 [2024-12-15 05:57:02.111934] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.680 [2024-12-15 05:57:02.111935] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.680 05:57:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.680 05:57:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:42.680 05:57:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:42.680 05:57:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:42.680 05:57:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:42.680 05:57:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:42.680 05:57:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:42.680 05:57:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:42.680 [2024-12-15 05:57:02.415398] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.680 05:57:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:42.680 05:57:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:42.937 [2024-12-15 05:57:02.820867] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:42.937 05:57:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:42.937 05:57:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:43.195 Malloc0 00:06:43.195 05:57:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:43.455 Delay0 00:06:43.455 05:57:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.769 05:57:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:43.769 NULL1 00:06:43.769 05:57:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:44.099 05:57:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:44.099 05:57:04 
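The records above capture the test's target setup via `scripts/rpc.py`: a TCP transport, subsystem `nqn.2016-06.io.spdk:cnode1` with a data listener and a discovery listener, a malloc bdev wrapped in a delay bdev, and a resizable null bdev, followed by an `spdk_nvme_perf` run. As a readable restatement (a plain Python listing of the logged calls, not something executable against a live target):

```python
# The RPC sequence recorded in the log above, as (method, args) pairs.
# This restates the logged commands for readability; it does not invoke
# SPDK's rpc.py and assumes nothing beyond what the log shows.
setup_rpcs = [
    ("nvmf_create_transport", "-t tcp -o -u 8192"),
    ("nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10"),
    ("nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"),
    ("nvmf_subsystem_add_listener", "discovery -t tcp -a 10.0.0.2 -s 4420"),
    ("bdev_malloc_create", "32 512 -b Malloc0"),
    ("bdev_delay_create", "-b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000"),
    ("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1 Delay0"),
    ("bdev_null_create", "NULL1 1000 512"),
    ("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1 NULL1"),
]
```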
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=795339 00:06:44.099 05:57:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795339 00:06:44.099 05:57:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.440 05:57:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.440 05:57:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:44.440 05:57:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:44.702 true 00:06:44.702 05:57:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795339 00:06:44.702 05:57:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.960 05:57:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.960 05:57:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:44.960 05:57:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:45.217 true 00:06:45.217 05:57:05 
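The repeating pattern that begins here is the core of `ns_hotplug_stress.sh`: while the perf process is still alive (checked with `kill -0 $PERF_PID`), the script removes namespace 1, re-adds `Delay0`, bumps `null_size`, and resizes `NULL1`. A minimal Python sketch of that control flow — `rpc()` here is a hypothetical recording stub standing in for `scripts/rpc.py`, not SPDK's actual client:

```python
# Sketch of the hotplug-stress loop seen in the log. The rpc() helper is a
# stub that records calls; in the real test each call goes through rpc.py,
# and the loop condition is `kill -0 $PERF_PID` rather than a fixed count.

calls = []  # record of issued RPCs, for illustration

def rpc(*args):
    calls.append(" ".join(str(a) for a in args))

def hotplug_stress(iterations, null_size=1000):
    """Mimic ns_hotplug_stress.sh lines 44-50: remove ns, re-add, resize."""
    for _ in range(iterations):
        rpc("nvmf_subsystem_remove_ns", "nqn.2016-06.io.spdk:cnode1", 1)
        rpc("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "Delay0")
        null_size += 1  # null_size=1001, 1002, ... as in the log
        rpc("bdev_null_resize", "NULL1", null_size)
    return null_size

final = hotplug_stress(29)  # the log resizes NULL1 from 1001 up to 1029
```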
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795339 00:06:45.217 05:57:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.149 Read completed with error (sct=0, sc=11) 00:06:46.406 05:57:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.407 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.407 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.407 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.407 05:57:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:46.407 05:57:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:46.664 true 00:06:46.664 05:57:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795339 00:06:46.664 05:57:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.594 05:57:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.594 05:57:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:47.594 05:57:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:47.851 true 00:06:47.851 05:57:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795339 00:06:47.851 05:57:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.109 05:57:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.366 05:57:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:48.366 05:57:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:48.623 true 00:06:48.623 05:57:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795339 00:06:48.623 05:57:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.556 05:57:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.813 05:57:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:49.813 05:57:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:49.813 true 00:06:50.070 05:57:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795339 00:06:50.070 05:57:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.070 05:57:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.327 05:57:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:50.327 05:57:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:50.585 true 00:06:50.585 05:57:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795339 00:06:50.585 05:57:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.516 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.516 05:57:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.773 05:57:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 
00:06:51.773 05:57:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:52.030 true 00:06:52.030 05:57:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795339 00:06:52.030 05:57:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.030 05:57:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.288 05:57:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:52.288 05:57:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:52.545 true 00:06:52.545 05:57:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795339 00:06:52.545 05:57:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.916 05:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:06:53.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.916 05:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:53.916 05:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:54.174 true 00:06:54.174 05:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795339 00:06:54.174 05:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.111 05:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.111 05:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:55.111 05:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:55.371 true 00:06:55.371 05:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795339 00:06:55.371 05:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.629 05:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.629 05:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:55.629 05:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:55.886 true 00:06:55.886 05:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795339 00:06:55.886 05:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.255 05:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.256 05:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:57.256 05:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:57.512 true 00:06:57.512 05:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795339 00:06:57.512 05:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.443 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.443 05:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.443 05:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:58.443 05:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:58.700 true 00:06:58.700 05:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795339 00:06:58.700 05:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.958 05:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.214 05:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:59.214 05:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:59.214 true 00:06:59.214 05:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795339 00:06:59.215 05:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.583 05:57:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.583 05:57:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:00.583 05:57:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:00.841 true 00:07:00.841 05:57:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795339 00:07:00.841 05:57:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.772 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.772 
05:57:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.772 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.029 05:57:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:02.029 05:57:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:02.029 true 00:07:02.029 05:57:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795339 00:07:02.029 05:57:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.286 05:57:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.544 05:57:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:02.544 05:57:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:02.801 true 00:07:02.801 05:57:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795339 00:07:02.801 05:57:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.173 05:57:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.173 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.173 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.173 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.173 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.173 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.173 05:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:04.173 05:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:04.173 true 00:07:04.173 05:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795339 00:07:04.173 05:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.104 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.104 05:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.104 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.361 05:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:05.361 05:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:05.361 true 00:07:05.361 05:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795339 00:07:05.361 05:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.618 05:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.876 05:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:05.876 05:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:05.876 true 00:07:06.133 05:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795339 00:07:06.133 05:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.502 05:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.502 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:07:07.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.503 05:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:07.503 05:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:07.503 true 00:07:07.760 05:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795339 00:07:07.760 05:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.324 05:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.581 05:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:08.581 05:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:08.840 true 00:07:08.840 05:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795339 00:07:08.840 05:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.097 05:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:07:09.355 05:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:09.355 05:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:09.355 true 00:07:09.355 05:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795339 00:07:09.355 05:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.726 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.726 05:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.726 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.726 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.726 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.726 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.726 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.726 05:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:10.726 05:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:10.984 true 00:07:10.984 05:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795339 00:07:10.984 05:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:11.917 05:57:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:11.917 05:57:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:07:11.917 05:57:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:07:12.175 true
00:07:12.175 05:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795339
00:07:12.175 05:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:12.175 05:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:12.433 05:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:07:12.433 05:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:07:12.691 true
00:07:12.691 05:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795339
00:07:12.691 05:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:13.624 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:13.882 05:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:13.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:13.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:13.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:13.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:13.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:13.882 05:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:07:13.882 05:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:07:14.141 true
00:07:14.141 05:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795339
00:07:14.141 05:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:15.075 05:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:15.075 Initializing NVMe Controllers
00:07:15.075 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:15.075 Controller IO queue size 128, less than required.
00:07:15.075 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:15.075 Controller IO queue size 128, less than required.
00:07:15.075 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:15.075 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:07:15.075 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:15.075 Initialization complete. Launching workers.
00:07:15.075 ========================================================
00:07:15.075 Latency(us)
00:07:15.075 Device Information : IOPS MiB/s Average min max
00:07:15.075 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1924.44 0.94 45607.54 2669.85 1012903.36
00:07:15.075 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17481.46 8.54 7304.19 2173.95 300313.70
00:07:15.075 ========================================================
00:07:15.075 Total : 19405.90 9.48 11102.66 2173.95 1012903.36
00:07:15.075
00:07:15.075 05:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:07:15.075 05:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:07:15.332 true
00:07:15.332 05:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795339
00:07:15.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (795339) - No such process
00:07:15.332 05:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 795339
00:07:15.332 05:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:15.590 05:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:15.848 05:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:15.848 05:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:15.848 05:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:15.848 05:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:15.848 05:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:07:15.848 null0
00:07:16.106 05:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:16.106 05:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:16.106 05:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:07:16.106 null1
00:07:16.106 05:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:16.106 05:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:16.106 05:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:07:16.365 null2
00:07:16.365 05:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:16.365 05:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:16.365 05:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:07:16.623 null3
00:07:16.623 05:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:16.623 05:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:16.623 05:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:07:16.623 null4
00:07:16.623 05:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:16.623 05:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:16.623 05:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:07:16.882 null5
00:07:16.882 05:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:16.882 05:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:16.882 05:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:07:17.141 null6
00:07:17.141 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:17.141 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:17.141 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:17.400 null7 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.400 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.401 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:17.401 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.401 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.401 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:17.401 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:17.401 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.401 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:17.401 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.401 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:17.401 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.401 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.401 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:17.401 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:17.401 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:17.401 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:17.401 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.401 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.401 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:17.401 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.401 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.401 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:17.401 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.401 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.401 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 801409 801411 801412 801415 801416 801418 801420 801423 00:07:17.401 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:17.401 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:17.401 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.401 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.401 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:17.659 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:17.659 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:17.659 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:17.659 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.659 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:17.659 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:17.659 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:17.659 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:17.659 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.659 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.659 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:17.659 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.659 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.659 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 
00:07:17.659 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.659 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.659 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:17.659 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.659 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.660 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.660 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.660 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.660 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.660 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:17.660 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:17.660 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:17.660 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.660 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.660 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:17.660 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.660 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.660 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:17.918 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:17.918 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:17.918 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.918 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:17.918 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:07:17.918 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:17.918 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:17.918 05:57:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:18.176 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.176 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.176 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:18.176 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.176 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.176 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:18.176 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.176 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.176 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.176 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:18.176 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.176 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.176 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.176 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:18.176 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:18.176 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.176 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.176 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:18.176 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.176 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.176 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:18.176 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.176 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.176 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:18.433 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.433 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:18.433 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:18.433 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:18.433 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:18.433 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:18.433 05:57:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:18.433 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:18.691 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:18.691 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:18.691 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:18.691 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:18.691 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:18.691 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:18.691 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:18.691 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:18.691 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:18.691 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:18.691 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:18.691 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:18.691 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:18.691 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:18.691 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:18.691 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:18.691 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:18.691 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:18.691 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:18.691 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:18.691 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:18.691 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:18.691 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:18.691 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:18.692 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:18.692 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:18.692 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:18.692 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:18.692 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:18.692 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:18.692 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:18.950 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:18.950 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:18.950 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:18.950 05:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:18.950 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:18.950 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:18.950 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:18.950 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:18.950 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:18.951 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:18.951 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:18.951 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:18.951 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:18.951 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:18.951 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:18.951 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:18.951 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:18.951 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:18.951 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:18.951 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:18.951 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:18.951 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:18.951 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:18.951 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:18.951 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:19.209 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:19.209 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:19.209 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:19.209 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:19.209 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:19.209 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:19.209 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:19.209 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:19.468 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:19.468 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:19.468 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:19.468 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:19.468 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:19.468 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:19.468 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:19.468 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:19.468 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:19.468 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:19.468 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:19.468 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:19.468 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:19.468 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:19.468 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:19.468 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:19.468 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:19.468 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:19.468 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:19.468 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:19.468 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:19.468 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:19.468 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:19.468 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:19.468 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:19.726 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:19.726 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:19.726 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:19.726 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:19.726 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:19.726 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:19.726 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:19.726 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:19.726 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:19.726 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:19.726 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:19.726 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:19.726 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:19.726 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:19.726 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:19.726 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:19.726 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:19.726 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:19.726 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:19.726 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:19.726 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:19.726 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:19.726 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:19.726 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:19.727 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:19.727 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:19.727 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:19.727 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:19.727 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:19.727 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:19.727 05:57:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:19.985 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:19.985 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:19.985 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:19.985 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:19.985 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:19.985 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:19.985 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:19.985 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:20.244 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:20.244 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:20.244 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:20.244 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:20.244 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:20.244 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:20.244 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:20.244 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:20.244 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:20.244 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:20.244 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:20.244 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:20.244 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:20.244 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:20.244 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:20.244 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:20.244 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:20.244 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:20.244 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:20.244 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:20.244 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:20.244 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:20.244 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:20.244 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:20.503 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:20.503 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:20.503 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:20.503 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:20.503 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:20.503 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:20.503 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:20.503 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:20.762 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:20.762 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:20.762 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:20.762 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:20.762 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:20.762 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:20.762 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:20.762 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:20.763 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:20.763 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:20.763 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:20.763 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:20.763 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:20.763 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:20.763 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:20.763 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:20.763 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:20.763 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:20.763 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:20.763 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:20.763 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:20.763 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:20.763 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:20.763 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:20.763 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:20.763 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:20.763 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:20.763 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:20.763 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:20.763 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:20.763 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:20.763 05:57:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:21.021 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:21.022 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:21.022 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:21.022 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:21.022 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:21.022 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:21.022 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:21.022 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:21.022 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:21.022 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:21.022 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:21.022 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:21.022 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:21.022 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:21.022 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:21.022 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:21.022 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:21.022 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:21.022 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:21.022 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:21.022 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:21.022 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:21.022 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:21.022 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:21.280 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:21.280 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:21.280 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:21.280 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:21.280 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:21.280 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:21.280 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:21.280 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:21.539 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.539 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.539 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.539 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.539 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.539 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.539 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.539 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.539 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.539 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.540 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.540 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.540 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.540 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.540 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.540 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.540 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:21.540 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:21.540 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:21.540 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:21.540 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:21.540 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:21.540 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:21.540 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:21.540 rmmod nvme_tcp 00:07:21.540 rmmod nvme_fabrics 00:07:21.540 rmmod nvme_keyring 00:07:21.540 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:21.540 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:21.540 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:21.540 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 795076 ']' 00:07:21.540 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 795076 00:07:21.540 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' 
-z 795076 ']' 00:07:21.540 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 795076 00:07:21.540 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:21.540 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.540 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 795076 00:07:21.540 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:21.540 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:21.540 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 795076' 00:07:21.540 killing process with pid 795076 00:07:21.540 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 795076 00:07:21.540 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 795076 00:07:21.799 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:21.799 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:21.799 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:21.799 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:21.799 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:21.799 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:21.799 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- 
# iptables-restore 00:07:21.799 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:21.799 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:21.799 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.799 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:21.799 05:57:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.335 05:57:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:24.335 00:07:24.335 real 0m48.090s 00:07:24.335 user 3m16.240s 00:07:24.335 sys 0m15.715s 00:07:24.335 05:57:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.335 05:57:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:24.335 ************************************ 00:07:24.335 END TEST nvmf_ns_hotplug_stress 00:07:24.335 ************************************ 00:07:24.335 05:57:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:24.335 05:57:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:24.335 05:57:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.335 05:57:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:24.335 ************************************ 00:07:24.335 START TEST nvmf_delete_subsystem 00:07:24.335 ************************************ 00:07:24.335 05:57:43 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:24.335 * Looking for test storage... 00:07:24.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:24.335 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:24.335 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:07:24.335 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:24.335 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:24.335 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:24.335 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:24.335 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:24.335 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.335 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:24.335 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:24.335 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:24.335 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:24.335 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:24.335 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:24.335 05:57:44 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:24.335 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:24.335 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:24.336 05:57:44 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:24.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.336 --rc genhtml_branch_coverage=1 00:07:24.336 --rc genhtml_function_coverage=1 00:07:24.336 --rc genhtml_legend=1 00:07:24.336 --rc geninfo_all_blocks=1 00:07:24.336 --rc geninfo_unexecuted_blocks=1 00:07:24.336 00:07:24.336 ' 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:24.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.336 --rc genhtml_branch_coverage=1 00:07:24.336 --rc genhtml_function_coverage=1 00:07:24.336 --rc genhtml_legend=1 00:07:24.336 --rc geninfo_all_blocks=1 00:07:24.336 --rc geninfo_unexecuted_blocks=1 00:07:24.336 00:07:24.336 ' 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:24.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.336 --rc genhtml_branch_coverage=1 00:07:24.336 --rc genhtml_function_coverage=1 00:07:24.336 --rc genhtml_legend=1 00:07:24.336 --rc geninfo_all_blocks=1 00:07:24.336 --rc geninfo_unexecuted_blocks=1 00:07:24.336 00:07:24.336 ' 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:24.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.336 --rc genhtml_branch_coverage=1 00:07:24.336 --rc genhtml_function_coverage=1 00:07:24.336 --rc genhtml_legend=1 00:07:24.336 --rc geninfo_all_blocks=1 00:07:24.336 --rc geninfo_unexecuted_blocks=1 00:07:24.336 00:07:24.336 ' 
00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.336 05:57:44 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:24.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:24.336 05:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:30.906 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:30.906 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:30.906 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:30.906 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:30.906 05:57:49 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:30.906 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:30.906 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:30.906 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:30.906 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:30.906 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:30.906 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:30.906 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:30.906 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:07:30.906 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:30.906 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:30.906 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:30.906 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:30.906 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:30.907 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:30.907 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:07:30.907 Found net devices under 0000:af:00.0: cvl_0_0
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:07:30.907 Found net devices under 0000:af:00.1: cvl_0_1
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:07:30.907 05:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:30.907 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:30.907 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:30.907 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:07:30.907 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:07:30.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:30.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms
00:07:30.907
00:07:30.907 --- 10.0.0.2 ping statistics ---
00:07:30.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:30.907 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms
00:07:30.907 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:30.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:30.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms
00:07:30.907
00:07:30.907 --- 10.0.0.1 ping statistics ---
00:07:30.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:30.907 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms
00:07:30.907 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:30.907 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:07:30.907 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:07:30.907 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:30.907 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:07:30.907 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:07:30.907 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:30.907 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:07:30.907 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:07:30.907 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:07:30.907 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=805725
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 805725
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 805725 ']'
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:30.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:30.908 [2024-12-15 05:57:50.189236] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:07:30.908 [2024-12-15 05:57:50.189295] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:30.908 [2024-12-15 05:57:50.264732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:30.908 [2024-12-15 05:57:50.285703] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:30.908 [2024-12-15 05:57:50.285741] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:30.908 [2024-12-15 05:57:50.285748] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:30.908 [2024-12-15 05:57:50.285754] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:30.908 [2024-12-15 05:57:50.285759] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:30.908 [2024-12-15 05:57:50.286847] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:07:30.908 [2024-12-15 05:57:50.286848] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:30.908 [2024-12-15 05:57:50.430013] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:30.908 [2024-12-15 05:57:50.450224] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:30.908 NULL1
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:30.908 Delay0
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=805755
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:07:30.908 05:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:07:30.908 [2024-12-15 05:57:50.561060] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:07:32.809 05:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:32.809 05:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.809 05:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error 
(sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 [2024-12-15 05:57:52.727884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cc400 is same with the state(6) to be set 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed 
with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 
00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 [2024-12-15 05:57:52.729979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cc5e0 is same with the state(6) to be set 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 
00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 starting I/O failed: -6 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error 
(sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 starting I/O failed: -6 00:07:32.809 Write completed with error (sct=0, sc=8) 00:07:32.809 Read completed with error (sct=0, sc=8) 00:07:32.810 starting I/O failed: -6 00:07:32.810 Read completed with error (sct=0, sc=8) 00:07:32.810 Read completed with error (sct=0, sc=8) 00:07:32.810 starting I/O failed: -6 00:07:32.810 Read completed with error (sct=0, sc=8) 00:07:32.810 Read completed with error (sct=0, sc=8) 00:07:32.810 starting I/O failed: -6 00:07:32.810 Read completed with error (sct=0, sc=8) 00:07:32.810 Write completed with error (sct=0, sc=8) 00:07:32.810 starting I/O failed: -6 00:07:32.810 Write completed with error (sct=0, sc=8) 00:07:32.810 Read completed with error (sct=0, sc=8) 00:07:32.810 starting I/O failed: -6 00:07:32.810 Write completed with error (sct=0, sc=8) 00:07:32.810 Read completed with error (sct=0, sc=8) 00:07:32.810 starting I/O failed: -6 00:07:32.810 Read completed with error (sct=0, sc=8) 00:07:32.810 Write completed with error (sct=0, sc=8) 00:07:32.810 starting I/O failed: -6 00:07:32.810 Read completed with error (sct=0, sc=8) 00:07:32.810 Write completed with error (sct=0, sc=8) 00:07:32.810 starting I/O failed: -6 00:07:32.810 Read completed with error (sct=0, sc=8) 00:07:32.810 Read completed 
with error (sct=0, sc=8) 00:07:32.810 starting I/O failed: -6 00:07:32.810 Write completed with error (sct=0, sc=8) 00:07:32.810 Read completed with error (sct=0, sc=8) 00:07:32.810 starting I/O failed: -6 00:07:32.810 Read completed with error (sct=0, sc=8) 00:07:32.810 Read completed with error (sct=0, sc=8) 00:07:32.810 starting I/O failed: -6 00:07:32.810 Read completed with error (sct=0, sc=8) 00:07:32.810 Read completed with error (sct=0, sc=8) 00:07:32.810 starting I/O failed: -6 00:07:32.810 Read completed with error (sct=0, sc=8) 00:07:32.810 Read completed with error (sct=0, sc=8) 00:07:32.810 starting I/O failed: -6 00:07:32.810 Read completed with error (sct=0, sc=8) 00:07:32.810 Write completed with error (sct=0, sc=8) 00:07:32.810 starting I/O failed: -6 00:07:32.810 Read completed with error (sct=0, sc=8) 00:07:32.810 Write completed with error (sct=0, sc=8) 00:07:32.810 starting I/O failed: -6 00:07:32.810 Write completed with error (sct=0, sc=8) 00:07:32.810 Write completed with error (sct=0, sc=8) 00:07:32.810 starting I/O failed: -6 00:07:32.810 Read completed with error (sct=0, sc=8) 00:07:32.810 Read completed with error (sct=0, sc=8) 00:07:32.810 starting I/O failed: -6 00:07:32.810 Read completed with error (sct=0, sc=8) 00:07:32.810 Write completed with error (sct=0, sc=8) 00:07:32.810 starting I/O failed: -6 00:07:32.810 Write completed with error (sct=0, sc=8) 00:07:32.810 Read completed with error (sct=0, sc=8) 00:07:32.810 starting I/O failed: -6 00:07:32.810 [2024-12-15 05:57:52.731267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fac60000c80 is same with the state(6) to be set 00:07:33.745 [2024-12-15 05:57:53.697829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca190 is same with the state(6) to be set 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, 
sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 [2024-12-15 05:57:53.731556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cbf70 is same with the state(6) to be set 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 
Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 [2024-12-15 05:57:53.731674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cc7c0 is same with the state(6) to be set 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed 
with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 [2024-12-15 05:57:53.734138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fac6000d060 is same with the state(6) to be set 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error 
(sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Write completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 Read completed with error (sct=0, sc=8) 00:07:33.745 [2024-12-15 05:57:53.734711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fac6000d6c0 is same with the state(6) to be set 00:07:33.745 Initializing NVMe Controllers 00:07:33.745 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:33.745 Controller IO queue size 128, less than required. 00:07:33.745 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:33.745 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:33.745 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:33.745 Initialization complete. Launching workers. 
00:07:33.745 ======================================================== 00:07:33.745 Latency(us) 00:07:33.745 Device Information : IOPS MiB/s Average min max 00:07:33.745 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 167.58 0.08 900412.44 927.78 1009334.86 00:07:33.745 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 186.48 0.09 900286.69 357.59 1011465.65 00:07:33.745 ======================================================== 00:07:33.745 Total : 354.06 0.17 900346.21 357.59 1011465.65 00:07:33.745 00:07:33.745 [2024-12-15 05:57:53.735251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ca190 (9): Bad file descriptor 00:07:33.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:33.745 05:57:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.745 05:57:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:33.745 05:57:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 805755 00:07:33.745 05:57:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:34.313 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:34.313 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 805755 00:07:34.313 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (805755) - No such process 00:07:34.313 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 805755 00:07:34.313 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:34.313 05:57:54 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 805755 00:07:34.313 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:34.313 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.313 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:34.313 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.313 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 805755 00:07:34.313 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:34.313 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:34.313 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:34.313 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:34.313 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:34.313 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.313 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.313 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.313 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.313 
05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.313 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.313 [2024-12-15 05:57:54.266528] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.313 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.313 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.313 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.313 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.313 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.313 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=806430 00:07:34.313 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:34.313 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:34.313 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806430 00:07:34.313 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:34.313 [2024-12-15 05:57:54.352710] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:34.878 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:34.878 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806430 00:07:34.878 05:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:35.443 05:57:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:35.443 05:57:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806430 00:07:35.443 05:57:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:35.701 05:57:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:35.701 05:57:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806430 00:07:35.701 05:57:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:36.266 05:57:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:36.266 05:57:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806430 00:07:36.266 05:57:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:36.832 05:57:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:36.832 05:57:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806430 00:07:36.832 05:57:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:37.398 05:57:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:37.398 05:57:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806430 00:07:37.398 05:57:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:37.398 Initializing NVMe Controllers 00:07:37.398 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:37.399 Controller IO queue size 128, less than required. 00:07:37.399 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:37.399 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:37.399 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:37.399 Initialization complete. Launching workers. 00:07:37.399 ======================================================== 00:07:37.399 Latency(us) 00:07:37.399 Device Information : IOPS MiB/s Average min max 00:07:37.399 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002079.34 1000163.89 1006567.44 00:07:37.399 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003989.57 1000127.32 1041527.37 00:07:37.399 ======================================================== 00:07:37.399 Total : 256.00 0.12 1003034.46 1000127.32 1041527.37 00:07:37.399 00:07:37.967 05:57:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:37.967 05:57:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806430 00:07:37.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (806430) - No such process 00:07:37.967 05:57:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 806430 00:07:37.967 05:57:57 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:37.967 05:57:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:37.967 05:57:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:37.967 05:57:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:37.967 05:57:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:37.967 05:57:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:37.967 05:57:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:37.967 05:57:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:37.967 rmmod nvme_tcp 00:07:37.967 rmmod nvme_fabrics 00:07:37.967 rmmod nvme_keyring 00:07:37.967 05:57:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:37.967 05:57:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:37.967 05:57:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:37.968 05:57:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 805725 ']' 00:07:37.968 05:57:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 805725 00:07:37.968 05:57:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 805725 ']' 00:07:37.968 05:57:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 805725 00:07:37.968 05:57:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:37.968 05:57:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.968 05:57:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 805725 00:07:37.968 05:57:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:37.968 05:57:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:37.968 05:57:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 805725' 00:07:37.968 killing process with pid 805725 00:07:37.968 05:57:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 805725 00:07:37.968 05:57:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 805725 00:07:37.968 05:57:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:37.968 05:57:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:37.968 05:57:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:37.968 05:57:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:37.968 05:57:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:37.968 05:57:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:37.968 05:57:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:37.968 05:57:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:37.968 05:57:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:37.968 05:57:58 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.968 05:57:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:37.968 05:57:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.504 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:40.504 00:07:40.504 real 0m16.240s 00:07:40.504 user 0m29.331s 00:07:40.504 sys 0m5.498s 00:07:40.504 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.504 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.504 ************************************ 00:07:40.504 END TEST nvmf_delete_subsystem 00:07:40.504 ************************************ 00:07:40.504 05:58:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:40.504 05:58:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:40.504 05:58:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.504 05:58:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:40.504 ************************************ 00:07:40.504 START TEST nvmf_host_management 00:07:40.504 ************************************ 00:07:40.504 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:40.504 * Looking for test storage... 
00:07:40.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:40.504 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:40.504 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:40.504 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:40.504 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:40.504 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.504 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.504 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.504 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.504 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.504 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.504 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.504 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:40.505 05:58:00 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.505 05:58:00 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:40.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.505 --rc genhtml_branch_coverage=1 00:07:40.505 --rc genhtml_function_coverage=1 00:07:40.505 --rc genhtml_legend=1 00:07:40.505 --rc geninfo_all_blocks=1 00:07:40.505 --rc geninfo_unexecuted_blocks=1 00:07:40.505 00:07:40.505 ' 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:40.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.505 --rc genhtml_branch_coverage=1 00:07:40.505 --rc genhtml_function_coverage=1 00:07:40.505 --rc genhtml_legend=1 00:07:40.505 --rc geninfo_all_blocks=1 00:07:40.505 --rc geninfo_unexecuted_blocks=1 00:07:40.505 00:07:40.505 ' 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:40.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.505 --rc genhtml_branch_coverage=1 00:07:40.505 --rc genhtml_function_coverage=1 00:07:40.505 --rc genhtml_legend=1 00:07:40.505 --rc geninfo_all_blocks=1 00:07:40.505 --rc geninfo_unexecuted_blocks=1 00:07:40.505 00:07:40.505 ' 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:40.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.505 --rc genhtml_branch_coverage=1 00:07:40.505 --rc genhtml_function_coverage=1 00:07:40.505 --rc genhtml_legend=1 00:07:40.505 --rc geninfo_all_blocks=1 00:07:40.505 --rc geninfo_unexecuted_blocks=1 00:07:40.505 00:07:40.505 ' 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:40.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.505 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:40.506 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:40.506 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:40.506 05:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:47.076 05:58:06 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:47.076 05:58:06 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:47.076 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:47.076 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:47.076 05:58:06 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:47.076 Found net devices under 0000:af:00.0: cvl_0_0 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:47.076 Found net devices under 0000:af:00.1: cvl_0_1 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:47.076 05:58:06 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:47.076 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:47.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:47.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:07:47.077 00:07:47.077 --- 10.0.0.2 ping statistics --- 00:07:47.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.077 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:47.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:47.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:07:47.077 00:07:47.077 --- 10.0.0.1 ping statistics --- 00:07:47.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.077 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=810574 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 810574 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 810574 ']' 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.077 [2024-12-15 05:58:06.525184] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:47.077 [2024-12-15 05:58:06.525233] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.077 [2024-12-15 05:58:06.605282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.077 [2024-12-15 05:58:06.628716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:47.077 [2024-12-15 05:58:06.628751] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:47.077 [2024-12-15 05:58:06.628759] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:47.077 [2024-12-15 05:58:06.628764] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:47.077 [2024-12-15 05:58:06.628769] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:47.077 [2024-12-15 05:58:06.630086] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.077 [2024-12-15 05:58:06.630109] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.077 [2024-12-15 05:58:06.630140] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.077 [2024-12-15 05:58:06.630140] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.077 [2024-12-15 05:58:06.761405] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:47.077 05:58:06 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.077 Malloc0 00:07:47.077 [2024-12-15 05:58:06.828639] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=810626 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 810626 /var/tmp/bdevperf.sock 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 810626 ']' 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:47.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:47.077 { 00:07:47.077 "params": { 00:07:47.077 "name": "Nvme$subsystem", 00:07:47.077 "trtype": "$TEST_TRANSPORT", 00:07:47.077 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:47.077 "adrfam": "ipv4", 00:07:47.077 "trsvcid": "$NVMF_PORT", 00:07:47.077 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:47.077 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:47.077 "hdgst": ${hdgst:-false}, 
00:07:47.077 "ddgst": ${ddgst:-false} 00:07:47.077 }, 00:07:47.077 "method": "bdev_nvme_attach_controller" 00:07:47.077 } 00:07:47.077 EOF 00:07:47.077 )") 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:47.077 05:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:47.077 "params": { 00:07:47.077 "name": "Nvme0", 00:07:47.077 "trtype": "tcp", 00:07:47.077 "traddr": "10.0.0.2", 00:07:47.077 "adrfam": "ipv4", 00:07:47.077 "trsvcid": "4420", 00:07:47.078 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:47.078 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:47.078 "hdgst": false, 00:07:47.078 "ddgst": false 00:07:47.078 }, 00:07:47.078 "method": "bdev_nvme_attach_controller" 00:07:47.078 }' 00:07:47.078 [2024-12-15 05:58:06.923644] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:47.078 [2024-12-15 05:58:06.923688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid810626 ] 00:07:47.078 [2024-12-15 05:58:06.995796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.078 [2024-12-15 05:58:07.017955] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.336 Running I/O for 10 seconds... 
00:07:47.336 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.336 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:47.336 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:47.336 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.336 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.336 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.336 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:47.336 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:47.336 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:47.336 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:47.336 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:47.336 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:47.336 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:47.336 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:47.336 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:07:47.336 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:47.336 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.336 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.336 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.336 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:07:47.336 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:07:47.336 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:47.595 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:47.595 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:47.596 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:47.596 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:47.596 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.596 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.596 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.596 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:07:47.596 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:07:47.596 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:47.596 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:47.596 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:47.596 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:47.596 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.596 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.596 [2024-12-15 05:58:07.699752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.699793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.699809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.699817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.699826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.699833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.699842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.699848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.699857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.699863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.699872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.699878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.699887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.699893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.699907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.699914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.699922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.699928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:07:47.596 [2024-12-15 05:58:07.699936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.699943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.699951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.699958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.699965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.699972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.699980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.699986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.700001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.700008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.700016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.700022] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.700030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.700051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.700060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.700066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.700077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.700086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.700094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.700101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.700109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.700117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.700125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.700132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.700140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.700146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.700154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.700161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.700169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.700175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.700184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.700190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.700198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.700205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.700213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.700219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.700227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.700233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.700241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.700248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.700256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.700262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.700270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.700276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.700284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 
[2024-12-15 05:58:07.700291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.700299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.700310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.596 [2024-12-15 05:58:07.700319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.596 [2024-12-15 05:58:07.700327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 [2024-12-15 05:58:07.700335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.597 [2024-12-15 05:58:07.700342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 [2024-12-15 05:58:07.700350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.597 [2024-12-15 05:58:07.700356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 [2024-12-15 05:58:07.700365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.597 [2024-12-15 05:58:07.700371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 [2024-12-15 05:58:07.700379] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.597 [2024-12-15 05:58:07.700386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 [2024-12-15 05:58:07.700394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.597 [2024-12-15 05:58:07.700400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 [2024-12-15 05:58:07.700408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.597 [2024-12-15 05:58:07.700415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 [2024-12-15 05:58:07.700422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.597 [2024-12-15 05:58:07.700429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 [2024-12-15 05:58:07.700437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.597 [2024-12-15 05:58:07.700444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 [2024-12-15 05:58:07.700452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.597 [2024-12-15 05:58:07.700458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 [2024-12-15 05:58:07.700466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.597 [2024-12-15 05:58:07.700472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 [2024-12-15 05:58:07.700480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.597 [2024-12-15 05:58:07.700488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 [2024-12-15 05:58:07.700496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.597 [2024-12-15 05:58:07.700503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 [2024-12-15 05:58:07.700511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.597 [2024-12-15 05:58:07.700517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 [2024-12-15 05:58:07.700524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.597 [2024-12-15 05:58:07.700531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 [2024-12-15 05:58:07.700539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.597 [2024-12-15 05:58:07.700545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 [2024-12-15 05:58:07.700554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.597 [2024-12-15 05:58:07.700561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 [2024-12-15 05:58:07.700569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.597 [2024-12-15 05:58:07.700576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 [2024-12-15 05:58:07.700584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.597 [2024-12-15 05:58:07.700590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 [2024-12-15 05:58:07.700598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.597 [2024-12-15 05:58:07.700605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 [2024-12-15 05:58:07.700613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.597 [2024-12-15 05:58:07.700619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 
[2024-12-15 05:58:07.700627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.597 [2024-12-15 05:58:07.700633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 [2024-12-15 05:58:07.700641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.597 [2024-12-15 05:58:07.700648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 [2024-12-15 05:58:07.700656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.597 [2024-12-15 05:58:07.700662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 [2024-12-15 05:58:07.700672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.597 [2024-12-15 05:58:07.700678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 [2024-12-15 05:58:07.700686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.597 [2024-12-15 05:58:07.700694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 [2024-12-15 05:58:07.700702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.597 [2024-12-15 05:58:07.700708] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 [2024-12-15 05:58:07.700716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.597 [2024-12-15 05:58:07.700722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 [2024-12-15 05:58:07.700730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.597 [2024-12-15 05:58:07.700737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 [2024-12-15 05:58:07.700745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.597 [2024-12-15 05:58:07.700752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 [2024-12-15 05:58:07.700760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.597 [2024-12-15 05:58:07.700766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.597 [2024-12-15 05:58:07.700773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1839f50 is same with the state(6) to be set 00:07:47.597 [2024-12-15 05:58:07.701737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:47.597 task offset: 106112 on job bdev=Nvme0n1 fails 00:07:47.597 00:07:47.597 Latency(us) 00:07:47.597 [2024-12-15T04:58:07.737Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:47.597 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:47.597 Job: Nvme0n1 ended in about 0.40 seconds with error 00:07:47.597 Verification LBA range: start 0x0 length 0x400 00:07:47.597 Nvme0n1 : 0.40 1897.59 118.60 158.13 0.00 30310.15 1513.57 26963.38 00:07:47.597 [2024-12-15T04:58:07.737Z] =================================================================================================================== 00:07:47.597 [2024-12-15T04:58:07.737Z] Total : 1897.59 118.60 158.13 0.00 30310.15 1513.57 26963.38 00:07:47.597 [2024-12-15 05:58:07.704086] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:47.597 [2024-12-15 05:58:07.704109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1826490 (9): Bad file descriptor 00:07:47.597 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.597 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:47.598 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.598 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.598 [2024-12-15 05:58:07.711333] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:47.598 [2024-12-15 05:58:07.711408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:47.598 [2024-12-15 05:58:07.711431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:47.598 [2024-12-15 05:58:07.711445] nvme_fabric.c: 
599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:47.598 [2024-12-15 05:58:07.711452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:47.598 [2024-12-15 05:58:07.711459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:47.598 [2024-12-15 05:58:07.711465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1826490 00:07:47.598 [2024-12-15 05:58:07.711482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1826490 (9): Bad file descriptor 00:07:47.598 [2024-12-15 05:58:07.711493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:07:47.598 [2024-12-15 05:58:07.711500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:07:47.598 [2024-12-15 05:58:07.711508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:07:47.598 [2024-12-15 05:58:07.711516] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:07:47.598 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.598 05:58:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:48.970 05:58:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 810626 00:07:48.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (810626) - No such process 00:07:48.970 05:58:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:48.970 05:58:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:48.970 05:58:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:48.970 05:58:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:48.970 05:58:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:48.970 05:58:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:48.970 05:58:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:48.970 05:58:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:48.970 { 00:07:48.970 "params": { 00:07:48.970 "name": "Nvme$subsystem", 00:07:48.970 "trtype": "$TEST_TRANSPORT", 00:07:48.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:48.970 "adrfam": "ipv4", 00:07:48.970 "trsvcid": "$NVMF_PORT", 00:07:48.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:48.970 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:07:48.970 "hdgst": ${hdgst:-false}, 00:07:48.970 "ddgst": ${ddgst:-false} 00:07:48.970 }, 00:07:48.970 "method": "bdev_nvme_attach_controller" 00:07:48.970 } 00:07:48.970 EOF 00:07:48.970 )") 00:07:48.970 05:58:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:48.970 05:58:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:48.970 05:58:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:48.970 05:58:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:48.970 "params": { 00:07:48.970 "name": "Nvme0", 00:07:48.970 "trtype": "tcp", 00:07:48.970 "traddr": "10.0.0.2", 00:07:48.970 "adrfam": "ipv4", 00:07:48.970 "trsvcid": "4420", 00:07:48.970 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:48.970 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:48.970 "hdgst": false, 00:07:48.970 "ddgst": false 00:07:48.970 }, 00:07:48.970 "method": "bdev_nvme_attach_controller" 00:07:48.970 }' 00:07:48.970 [2024-12-15 05:58:08.771433] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:48.970 [2024-12-15 05:58:08.771482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid811030 ] 00:07:48.970 [2024-12-15 05:58:08.845143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.970 [2024-12-15 05:58:08.866275] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.228 Running I/O for 1 seconds... 
00:07:50.162 1984.00 IOPS, 124.00 MiB/s 00:07:50.162 Latency(us) 00:07:50.162 [2024-12-15T04:58:10.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.162 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:50.162 Verification LBA range: start 0x0 length 0x400 00:07:50.162 Nvme0n1 : 1.00 2041.36 127.59 0.00 0.00 30863.24 7115.34 27088.21 00:07:50.162 [2024-12-15T04:58:10.302Z] =================================================================================================================== 00:07:50.162 [2024-12-15T04:58:10.302Z] Total : 2041.36 127.59 0.00 0.00 30863.24 7115.34 27088.21 00:07:50.421 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:50.421 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:50.421 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:50.421 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:50.421 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:50.421 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:50.421 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:50.421 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:50.421 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:50.421 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:50.421 05:58:10 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:50.421 rmmod nvme_tcp 00:07:50.421 rmmod nvme_fabrics 00:07:50.421 rmmod nvme_keyring 00:07:50.421 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:50.421 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:50.421 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:50.421 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 810574 ']' 00:07:50.421 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 810574 00:07:50.421 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 810574 ']' 00:07:50.421 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 810574 00:07:50.421 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:50.421 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.421 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 810574 00:07:50.421 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:50.421 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:50.421 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 810574' 00:07:50.421 killing process with pid 810574 00:07:50.421 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 810574 00:07:50.421 05:58:10 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 810574 00:07:50.680 [2024-12-15 05:58:10.622281] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:50.680 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:50.680 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:50.680 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:50.680 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:50.680 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:50.680 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:50.680 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:50.680 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:50.680 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:50.680 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.680 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:50.680 05:58:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.586 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:52.586 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:52.586 00:07:52.586 real 0m12.477s 00:07:52.586 user 0m20.083s 
00:07:52.586 sys 0m5.532s 00:07:52.586 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.586 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.586 ************************************ 00:07:52.586 END TEST nvmf_host_management 00:07:52.586 ************************************ 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:52.845 ************************************ 00:07:52.845 START TEST nvmf_lvol 00:07:52.845 ************************************ 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:52.845 * Looking for test storage... 
00:07:52.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:52.845 05:58:12 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:52.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.845 --rc genhtml_branch_coverage=1 00:07:52.845 --rc genhtml_function_coverage=1 00:07:52.845 --rc genhtml_legend=1 00:07:52.845 --rc geninfo_all_blocks=1 00:07:52.845 --rc geninfo_unexecuted_blocks=1 
00:07:52.845 00:07:52.845 ' 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:52.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.845 --rc genhtml_branch_coverage=1 00:07:52.845 --rc genhtml_function_coverage=1 00:07:52.845 --rc genhtml_legend=1 00:07:52.845 --rc geninfo_all_blocks=1 00:07:52.845 --rc geninfo_unexecuted_blocks=1 00:07:52.845 00:07:52.845 ' 00:07:52.845 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:52.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.845 --rc genhtml_branch_coverage=1 00:07:52.845 --rc genhtml_function_coverage=1 00:07:52.846 --rc genhtml_legend=1 00:07:52.846 --rc geninfo_all_blocks=1 00:07:52.846 --rc geninfo_unexecuted_blocks=1 00:07:52.846 00:07:52.846 ' 00:07:52.846 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:52.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.846 --rc genhtml_branch_coverage=1 00:07:52.846 --rc genhtml_function_coverage=1 00:07:52.846 --rc genhtml_legend=1 00:07:52.846 --rc geninfo_all_blocks=1 00:07:52.846 --rc geninfo_unexecuted_blocks=1 00:07:52.846 00:07:52.846 ' 00:07:52.846 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:52.846 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:52.846 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:52.846 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.846 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:52.846 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.846 05:58:12 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.846 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.846 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.846 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.846 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:52.846 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.165 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:53.165 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:53.165 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.165 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.165 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:53.165 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.165 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:53.165 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:53.165 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.165 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.165 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.165 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.165 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.165 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.165 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:53.165 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.165 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:53.165 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:53.165 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:53.165 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.165 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.165 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.165 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:53.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:53.165 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:53.165 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:53.165 05:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:53.165 05:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:53.165 05:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:53.165 05:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:53.165 05:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:53.165 05:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.165 05:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:53.165 05:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:53.165 05:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.165 05:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:53.165 05:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:53.165 05:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:53.165 05:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.165 05:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:53.165 05:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.165 05:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:53.165 05:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:53.165 05:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:53.165 05:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:58.538 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:58.539 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:58.539 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:58.539 
05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:58.539 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:58.798 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.798 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:58.798 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:58.798 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:58.798 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:58.798 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.798 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:58.798 Found net devices under 0000:af:00.0: cvl_0_0 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:58.799 05:58:18 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:58.799 Found net devices under 0000:af:00.1: cvl_0_1 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:58.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:58.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:07:58.799 00:07:58.799 --- 10.0.0.2 ping statistics --- 00:07:58.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.799 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:58.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:58.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:07:58.799 00:07:58.799 --- 10.0.0.1 ping statistics --- 00:07:58.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.799 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:58.799 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:59.062 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:59.062 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:59.062 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:07:59.062 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:59.062 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=814804 00:07:59.062 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:59.062 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 814804 00:07:59.062 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 814804 ']' 00:07:59.062 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.062 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.062 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.062 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.062 05:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:59.062 [2024-12-15 05:58:19.019096] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:07:59.062 [2024-12-15 05:58:19.019139] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.062 [2024-12-15 05:58:19.094108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:59.062 [2024-12-15 05:58:19.115693] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:59.062 [2024-12-15 05:58:19.115729] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:59.062 [2024-12-15 05:58:19.115736] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:59.062 [2024-12-15 05:58:19.115742] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:59.062 [2024-12-15 05:58:19.115746] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:59.062 [2024-12-15 05:58:19.116943] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.062 [2024-12-15 05:58:19.117053] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.062 [2024-12-15 05:58:19.117054] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.320 05:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:59.320 05:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:59.320 05:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:59.320 05:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:59.320 05:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:59.320 05:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:59.320 05:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:59.320 [2024-12-15 05:58:19.416357] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.320 05:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:59.579 05:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:59.579 05:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:59.838 05:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:59.838 05:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:00.096 05:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:00.355 05:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b6fc671a-434b-4271-a996-f03966e531b5 00:08:00.355 05:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b6fc671a-434b-4271-a996-f03966e531b5 lvol 20 00:08:00.613 05:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c877d127-5e76-4ca7-ac27-1f4cb14cd57e 00:08:00.613 05:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:00.613 05:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c877d127-5e76-4ca7-ac27-1f4cb14cd57e 00:08:00.871 05:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:01.130 [2024-12-15 05:58:21.068241] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:01.130 05:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:01.388 05:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:01.388 05:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=815276 00:08:01.388 05:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:02.322 05:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot c877d127-5e76-4ca7-ac27-1f4cb14cd57e MY_SNAPSHOT 00:08:02.580 05:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ddb6e45d-9faa-4666-afcc-9d2dfc53c36d 00:08:02.580 05:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize c877d127-5e76-4ca7-ac27-1f4cb14cd57e 30 00:08:02.838 05:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ddb6e45d-9faa-4666-afcc-9d2dfc53c36d MY_CLONE 00:08:03.095 05:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=9d114988-c438-4034-9a82-d4590acb0342 00:08:03.095 05:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 9d114988-c438-4034-9a82-d4590acb0342 00:08:03.660 05:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 815276 00:08:11.766 Initializing NVMe Controllers 00:08:11.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:11.766 Controller IO queue size 128, less than required. 00:08:11.766 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:11.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:11.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:11.766 Initialization complete. Launching workers. 00:08:11.766 ======================================================== 00:08:11.766 Latency(us) 00:08:11.766 Device Information : IOPS MiB/s Average min max 00:08:11.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12240.60 47.81 10456.12 1522.89 58496.56 00:08:11.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12475.20 48.73 10262.45 2272.17 57662.63 00:08:11.766 ======================================================== 00:08:11.766 Total : 24715.80 96.55 10358.36 1522.89 58496.56 00:08:11.766 00:08:11.766 05:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:12.025 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c877d127-5e76-4ca7-ac27-1f4cb14cd57e 00:08:12.283 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b6fc671a-434b-4271-a996-f03966e531b5 00:08:12.542 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:12.542 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:12.542 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:12.542 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:12.542 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:12.542 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:12.542 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:12.542 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:12.542 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:12.542 rmmod nvme_tcp 00:08:12.542 rmmod nvme_fabrics 00:08:12.542 rmmod nvme_keyring 00:08:12.542 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:12.542 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:12.542 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:12.542 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 814804 ']' 00:08:12.542 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 814804 00:08:12.542 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 814804 ']' 00:08:12.542 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 814804 00:08:12.542 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:12.542 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:12.542 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 814804 00:08:12.542 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:12.542 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:12.542 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 814804' 00:08:12.542 killing process with pid 814804 00:08:12.542 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- common/autotest_common.sh@973 -- # kill 814804 00:08:12.542 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 814804 00:08:12.801 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:12.801 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:12.801 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:12.801 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:12.801 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:12.801 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:12.801 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:12.801 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:12.801 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:12.801 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.801 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.801 05:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.336 05:58:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:15.336 00:08:15.336 real 0m22.055s 00:08:15.336 user 1m3.693s 00:08:15.336 sys 0m7.579s 00:08:15.336 05:58:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.336 05:58:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:15.336 ************************************ 00:08:15.336 END TEST nvmf_lvol 00:08:15.336 
************************************ 00:08:15.336 05:58:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:15.336 05:58:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:15.336 05:58:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.337 05:58:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:15.337 ************************************ 00:08:15.337 START TEST nvmf_lvs_grow 00:08:15.337 ************************************ 00:08:15.337 05:58:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:15.337 * Looking for test storage... 00:08:15.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@336 -- # read -ra ver1 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:15.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.337 --rc genhtml_branch_coverage=1 00:08:15.337 --rc genhtml_function_coverage=1 00:08:15.337 --rc genhtml_legend=1 00:08:15.337 --rc geninfo_all_blocks=1 00:08:15.337 --rc geninfo_unexecuted_blocks=1 00:08:15.337 00:08:15.337 ' 
00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:15.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.337 --rc genhtml_branch_coverage=1 00:08:15.337 --rc genhtml_function_coverage=1 00:08:15.337 --rc genhtml_legend=1 00:08:15.337 --rc geninfo_all_blocks=1 00:08:15.337 --rc geninfo_unexecuted_blocks=1 00:08:15.337 00:08:15.337 ' 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:15.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.337 --rc genhtml_branch_coverage=1 00:08:15.337 --rc genhtml_function_coverage=1 00:08:15.337 --rc genhtml_legend=1 00:08:15.337 --rc geninfo_all_blocks=1 00:08:15.337 --rc geninfo_unexecuted_blocks=1 00:08:15.337 00:08:15.337 ' 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:15.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.337 --rc genhtml_branch_coverage=1 00:08:15.337 --rc genhtml_function_coverage=1 00:08:15.337 --rc genhtml_legend=1 00:08:15.337 --rc geninfo_all_blocks=1 00:08:15.337 --rc geninfo_unexecuted_blocks=1 00:08:15.337 00:08:15.337 ' 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.337 05:58:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.337 
05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.337 05:58:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:15.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:15.337 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:15.338 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:15.338 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:15.338 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:15.338 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.338 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:15.338 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:15.338 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:15.338 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.338 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.338 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.338 
05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:15.338 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:15.338 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:15.338 05:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:21.907 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:21.907 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:21.907 
05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:21.907 Found net devices under 0000:af:00.0: cvl_0_0 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:21.907 Found net devices under 0000:af:00.1: cvl_0_1 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:21.907 05:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:21.907 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:21.907 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:21.907 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:21.907 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:21.907 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:21.907 05:58:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:21.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:21.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:08:21.907 00:08:21.907 --- 10.0.0.2 ping statistics --- 00:08:21.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.908 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:21.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:21.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:08:21.908 00:08:21.908 --- 10.0.0.1 ping statistics --- 00:08:21.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.908 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=820555 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 820555 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 820555 ']' 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:21.908 [2024-12-15 05:58:41.213849] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:21.908 [2024-12-15 05:58:41.213898] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.908 [2024-12-15 05:58:41.295316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.908 [2024-12-15 05:58:41.317114] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.908 [2024-12-15 05:58:41.317149] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.908 [2024-12-15 05:58:41.317156] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.908 [2024-12-15 05:58:41.317162] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.908 [2024-12-15 05:58:41.317167] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
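The `nvmf_tcp_init` sequence in the trace (netns creation, moving the target port into the namespace, addressing both ends, then the ping checks) can be sketched as a command generator. This helper only *emits* the commands rather than executing them, since the real steps need root and the physical `cvl_0_0`/`cvl_0_1` ports; the 10.0.0.1/10.0.0.2 addressing mirrors `NVMF_INITIATOR_IP`/`NVMF_FIRST_TARGET_IP` above:

```shell
# Emit (do not run) the namespace bring-up the harness performs: the target
# port is isolated in its own netns so initiator and target traverse a real
# TCP path between the two physical ports.
netns_setup_cmds() {
  local tgt_if=$1 ini_if=$2 ns=$3
  printf '%s\n' \
    "ip netns add $ns" \
    "ip link set $tgt_if netns $ns" \
    "ip addr add 10.0.0.1/24 dev $ini_if" \
    "ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if" \
    "ip link set $ini_if up" \
    "ip netns exec $ns ip link set $tgt_if up" \
    "ip netns exec $ns ip link set lo up"
}

netns_setup_cmds cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

Every target-side command after this point in the log (including `nvmf_tgt` itself, via `NVMF_TARGET_NS_CMD`) runs under `ip netns exec cvl_0_0_ns_spdk`.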
00:08:21.908 [2024-12-15 05:58:41.317666] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:21.908 [2024-12-15 05:58:41.616465] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:21.908 ************************************ 00:08:21.908 START TEST lvs_grow_clean 00:08:21.908 ************************************ 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:21.908 05:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:22.166 05:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7deb329b-429e-4cf2-82b6-da63e143e71b 00:08:22.166 05:58:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7deb329b-429e-4cf2-82b6-da63e143e71b 00:08:22.166 05:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:22.166 05:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:22.166 05:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:22.166 05:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7deb329b-429e-4cf2-82b6-da63e143e71b lvol 150 00:08:22.425 05:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1243a7d6-0e3d-4c28-a5eb-98cfcc02fee5 00:08:22.425 05:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:22.425 05:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:22.683 [2024-12-15 05:58:42.651409] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:22.683 [2024-12-15 05:58:42.651457] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:22.683 true 00:08:22.683 05:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7deb329b-429e-4cf2-82b6-da63e143e71b 00:08:22.683 05:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:22.941 05:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:22.941 05:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:22.941 05:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1243a7d6-0e3d-4c28-a5eb-98cfcc02fee5 00:08:23.200 05:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:23.458 [2024-12-15 05:58:43.377565] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:23.459 05:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:23.459 05:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=821043 00:08:23.459 05:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:23.459 05:58:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:23.459 05:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 821043 /var/tmp/bdevperf.sock 00:08:23.459 05:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 821043 ']' 00:08:23.459 05:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:23.459 05:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.459 05:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:23.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:23.459 05:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.459 05:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:23.717 [2024-12-15 05:58:43.623072] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:23.717 [2024-12-15 05:58:43.623117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid821043 ] 00:08:23.717 [2024-12-15 05:58:43.697555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.717 [2024-12-15 05:58:43.719356] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.717 05:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.717 05:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:23.717 05:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:24.285 Nvme0n1 00:08:24.285 05:58:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:24.543 [ 00:08:24.543 { 00:08:24.543 "name": "Nvme0n1", 00:08:24.543 "aliases": [ 00:08:24.543 "1243a7d6-0e3d-4c28-a5eb-98cfcc02fee5" 00:08:24.543 ], 00:08:24.543 "product_name": "NVMe disk", 00:08:24.543 "block_size": 4096, 00:08:24.543 "num_blocks": 38912, 00:08:24.543 "uuid": "1243a7d6-0e3d-4c28-a5eb-98cfcc02fee5", 00:08:24.543 "numa_id": 1, 00:08:24.543 "assigned_rate_limits": { 00:08:24.543 "rw_ios_per_sec": 0, 00:08:24.543 "rw_mbytes_per_sec": 0, 00:08:24.543 "r_mbytes_per_sec": 0, 00:08:24.543 "w_mbytes_per_sec": 0 00:08:24.543 }, 00:08:24.543 "claimed": false, 00:08:24.543 "zoned": false, 00:08:24.543 "supported_io_types": { 00:08:24.543 "read": true, 
00:08:24.543 "write": true, 00:08:24.544 "unmap": true, 00:08:24.544 "flush": true, 00:08:24.544 "reset": true, 00:08:24.544 "nvme_admin": true, 00:08:24.544 "nvme_io": true, 00:08:24.544 "nvme_io_md": false, 00:08:24.544 "write_zeroes": true, 00:08:24.544 "zcopy": false, 00:08:24.544 "get_zone_info": false, 00:08:24.544 "zone_management": false, 00:08:24.544 "zone_append": false, 00:08:24.544 "compare": true, 00:08:24.544 "compare_and_write": true, 00:08:24.544 "abort": true, 00:08:24.544 "seek_hole": false, 00:08:24.544 "seek_data": false, 00:08:24.544 "copy": true, 00:08:24.544 "nvme_iov_md": false 00:08:24.544 }, 00:08:24.544 "memory_domains": [ 00:08:24.544 { 00:08:24.544 "dma_device_id": "system", 00:08:24.544 "dma_device_type": 1 00:08:24.544 } 00:08:24.544 ], 00:08:24.544 "driver_specific": { 00:08:24.544 "nvme": [ 00:08:24.544 { 00:08:24.544 "trid": { 00:08:24.544 "trtype": "TCP", 00:08:24.544 "adrfam": "IPv4", 00:08:24.544 "traddr": "10.0.0.2", 00:08:24.544 "trsvcid": "4420", 00:08:24.544 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:24.544 }, 00:08:24.544 "ctrlr_data": { 00:08:24.544 "cntlid": 1, 00:08:24.544 "vendor_id": "0x8086", 00:08:24.544 "model_number": "SPDK bdev Controller", 00:08:24.544 "serial_number": "SPDK0", 00:08:24.544 "firmware_revision": "25.01", 00:08:24.544 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:24.544 "oacs": { 00:08:24.544 "security": 0, 00:08:24.544 "format": 0, 00:08:24.544 "firmware": 0, 00:08:24.544 "ns_manage": 0 00:08:24.544 }, 00:08:24.544 "multi_ctrlr": true, 00:08:24.544 "ana_reporting": false 00:08:24.544 }, 00:08:24.544 "vs": { 00:08:24.544 "nvme_version": "1.3" 00:08:24.544 }, 00:08:24.544 "ns_data": { 00:08:24.544 "id": 1, 00:08:24.544 "can_share": true 00:08:24.544 } 00:08:24.544 } 00:08:24.544 ], 00:08:24.544 "mp_policy": "active_passive" 00:08:24.544 } 00:08:24.544 } 00:08:24.544 ] 00:08:24.544 05:58:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=821101 
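The cluster counts the test asserts (49 data clusters on the 200 MiB AIO bdev above, and 99 after the `truncate -s 400M` + rescan + `bdev_lvol_grow_lvstore` later in the log) line up with a back-of-envelope calculation, assuming one 4 MiB cluster is consumed by lvstore metadata in this configuration (that overhead is an assumption inferred from the trace, not something the log states directly):

```shell
# Expected total_data_clusters for an lvstore with 4 MiB clusters, assuming
# one cluster of metadata overhead (inferred from the 49/99 values in the log).
data_clusters() { echo $(( $1 / 4 - 1 )); }

data_clusters 200   # 200 MiB AIO bdev            -> 49
data_clusters 400   # after truncate -s 400M grow -> 99
```

This is why the test's `(( data_clusters == 49 ))` and `(( data_clusters == 99 ))` checks bracket the grow operation: the doubled backing file should roughly double the usable clusters.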
00:08:24.544 05:58:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:24.544 05:58:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:24.544 Running I/O for 10 seconds... 00:08:25.480 Latency(us) 00:08:25.480 [2024-12-15T04:58:45.620Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.480 Nvme0n1 : 1.00 23607.00 92.21 0.00 0.00 0.00 0.00 0.00 00:08:25.480 [2024-12-15T04:58:45.620Z] =================================================================================================================== 00:08:25.480 [2024-12-15T04:58:45.620Z] Total : 23607.00 92.21 0.00 0.00 0.00 0.00 0.00 00:08:25.480 00:08:26.416 05:58:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7deb329b-429e-4cf2-82b6-da63e143e71b 00:08:26.675 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.675 Nvme0n1 : 2.00 23790.00 92.93 0.00 0.00 0.00 0.00 0.00 00:08:26.675 [2024-12-15T04:58:46.815Z] =================================================================================================================== 00:08:26.675 [2024-12-15T04:58:46.815Z] Total : 23790.00 92.93 0.00 0.00 0.00 0.00 0.00 00:08:26.675 00:08:26.675 true 00:08:26.675 05:58:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7deb329b-429e-4cf2-82b6-da63e143e71b 00:08:26.675 05:58:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:08:26.934 05:58:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:26.934 05:58:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:26.934 05:58:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 821101 00:08:27.500 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.500 Nvme0n1 : 3.00 23807.67 93.00 0.00 0.00 0.00 0.00 0.00 00:08:27.500 [2024-12-15T04:58:47.640Z] =================================================================================================================== 00:08:27.500 [2024-12-15T04:58:47.640Z] Total : 23807.67 93.00 0.00 0.00 0.00 0.00 0.00 00:08:27.500 00:08:28.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.437 Nvme0n1 : 4.00 23863.25 93.22 0.00 0.00 0.00 0.00 0.00 00:08:28.437 [2024-12-15T04:58:48.577Z] =================================================================================================================== 00:08:28.437 [2024-12-15T04:58:48.577Z] Total : 23863.25 93.22 0.00 0.00 0.00 0.00 0.00 00:08:28.437 00:08:29.813 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.813 Nvme0n1 : 5.00 23898.60 93.35 0.00 0.00 0.00 0.00 0.00 00:08:29.813 [2024-12-15T04:58:49.953Z] =================================================================================================================== 00:08:29.813 [2024-12-15T04:58:49.953Z] Total : 23898.60 93.35 0.00 0.00 0.00 0.00 0.00 00:08:29.813 00:08:30.749 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.749 Nvme0n1 : 6.00 23864.33 93.22 0.00 0.00 0.00 0.00 0.00 00:08:30.749 [2024-12-15T04:58:50.889Z] =================================================================================================================== 00:08:30.749 
[2024-12-15T04:58:50.889Z] Total : 23864.33 93.22 0.00 0.00 0.00 0.00 0.00 00:08:30.749 00:08:31.684 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.684 Nvme0n1 : 7.00 23885.14 93.30 0.00 0.00 0.00 0.00 0.00 00:08:31.684 [2024-12-15T04:58:51.824Z] =================================================================================================================== 00:08:31.684 [2024-12-15T04:58:51.825Z] Total : 23885.14 93.30 0.00 0.00 0.00 0.00 0.00 00:08:31.685 00:08:32.619 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.619 Nvme0n1 : 8.00 23916.50 93.42 0.00 0.00 0.00 0.00 0.00 00:08:32.619 [2024-12-15T04:58:52.759Z] =================================================================================================================== 00:08:32.619 [2024-12-15T04:58:52.759Z] Total : 23916.50 93.42 0.00 0.00 0.00 0.00 0.00 00:08:32.619 00:08:33.555 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.555 Nvme0n1 : 9.00 23947.67 93.55 0.00 0.00 0.00 0.00 0.00 00:08:33.555 [2024-12-15T04:58:53.695Z] =================================================================================================================== 00:08:33.555 [2024-12-15T04:58:53.695Z] Total : 23947.67 93.55 0.00 0.00 0.00 0.00 0.00 00:08:33.555 00:08:34.490 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.490 Nvme0n1 : 10.00 23966.20 93.62 0.00 0.00 0.00 0.00 0.00 00:08:34.490 [2024-12-15T04:58:54.630Z] =================================================================================================================== 00:08:34.490 [2024-12-15T04:58:54.630Z] Total : 23966.20 93.62 0.00 0.00 0.00 0.00 0.00 00:08:34.490 00:08:34.490 00:08:34.490 Latency(us) 00:08:34.490 [2024-12-15T04:58:54.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.490 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:34.490 Nvme0n1 : 10.00 23967.93 93.62 0.00 0.00 5337.50 3120.76 10860.25 00:08:34.490 [2024-12-15T04:58:54.630Z] =================================================================================================================== 00:08:34.490 [2024-12-15T04:58:54.630Z] Total : 23967.93 93.62 0.00 0.00 5337.50 3120.76 10860.25 00:08:34.490 { 00:08:34.490 "results": [ 00:08:34.490 { 00:08:34.490 "job": "Nvme0n1", 00:08:34.490 "core_mask": "0x2", 00:08:34.490 "workload": "randwrite", 00:08:34.490 "status": "finished", 00:08:34.490 "queue_depth": 128, 00:08:34.490 "io_size": 4096, 00:08:34.490 "runtime": 10.004619, 00:08:34.490 "iops": 23967.929213496285, 00:08:34.490 "mibps": 93.62472349021986, 00:08:34.490 "io_failed": 0, 00:08:34.490 "io_timeout": 0, 00:08:34.490 "avg_latency_us": 5337.502017630506, 00:08:34.490 "min_latency_us": 3120.7619047619046, 00:08:34.490 "max_latency_us": 10860.251428571428 00:08:34.490 } 00:08:34.490 ], 00:08:34.490 "core_count": 1 00:08:34.490 } 00:08:34.490 05:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 821043 00:08:34.490 05:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 821043 ']' 00:08:34.490 05:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 821043 00:08:34.490 05:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:34.490 05:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.490 05:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 821043 00:08:34.749 05:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:34.749 05:58:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:34.749 05:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 821043' 00:08:34.749 killing process with pid 821043 00:08:34.749 05:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 821043 00:08:34.749 Received shutdown signal, test time was about 10.000000 seconds 00:08:34.749 00:08:34.749 Latency(us) 00:08:34.749 [2024-12-15T04:58:54.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.749 [2024-12-15T04:58:54.889Z] =================================================================================================================== 00:08:34.749 [2024-12-15T04:58:54.889Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:34.749 05:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 821043 00:08:34.749 05:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:35.008 05:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:35.266 05:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:35.266 05:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7deb329b-429e-4cf2-82b6-da63e143e71b 00:08:35.266 05:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:08:35.266 05:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:35.266 05:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:35.524 [2024-12-15 05:58:55.569662] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:35.524 05:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7deb329b-429e-4cf2-82b6-da63e143e71b 00:08:35.524 05:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:35.524 05:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7deb329b-429e-4cf2-82b6-da63e143e71b 00:08:35.524 05:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:35.524 05:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.524 05:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:35.524 05:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.524 05:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:35.524 05:58:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.524 05:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:35.524 05:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:35.524 05:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7deb329b-429e-4cf2-82b6-da63e143e71b 00:08:35.783 request: 00:08:35.783 { 00:08:35.783 "uuid": "7deb329b-429e-4cf2-82b6-da63e143e71b", 00:08:35.783 "method": "bdev_lvol_get_lvstores", 00:08:35.783 "req_id": 1 00:08:35.783 } 00:08:35.783 Got JSON-RPC error response 00:08:35.783 response: 00:08:35.783 { 00:08:35.783 "code": -19, 00:08:35.783 "message": "No such device" 00:08:35.783 } 00:08:35.783 05:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:35.783 05:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:35.783 05:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:35.783 05:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:35.783 05:58:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:36.042 aio_bdev 00:08:36.042 05:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1243a7d6-0e3d-4c28-a5eb-98cfcc02fee5 00:08:36.042 05:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=1243a7d6-0e3d-4c28-a5eb-98cfcc02fee5 00:08:36.042 05:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:36.042 05:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:36.042 05:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:36.042 05:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:36.042 05:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:36.042 05:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1243a7d6-0e3d-4c28-a5eb-98cfcc02fee5 -t 2000 00:08:36.301 [ 00:08:36.301 { 00:08:36.301 "name": "1243a7d6-0e3d-4c28-a5eb-98cfcc02fee5", 00:08:36.301 "aliases": [ 00:08:36.301 "lvs/lvol" 00:08:36.301 ], 00:08:36.301 "product_name": "Logical Volume", 00:08:36.301 "block_size": 4096, 00:08:36.301 "num_blocks": 38912, 00:08:36.301 "uuid": "1243a7d6-0e3d-4c28-a5eb-98cfcc02fee5", 00:08:36.301 "assigned_rate_limits": { 00:08:36.301 "rw_ios_per_sec": 0, 00:08:36.301 "rw_mbytes_per_sec": 0, 00:08:36.301 "r_mbytes_per_sec": 0, 00:08:36.301 "w_mbytes_per_sec": 0 00:08:36.301 }, 00:08:36.301 "claimed": false, 00:08:36.301 "zoned": false, 00:08:36.301 "supported_io_types": { 00:08:36.301 "read": true, 00:08:36.301 "write": true, 00:08:36.301 "unmap": true, 00:08:36.301 "flush": false, 00:08:36.301 "reset": true, 00:08:36.301 
"nvme_admin": false, 00:08:36.301 "nvme_io": false, 00:08:36.301 "nvme_io_md": false, 00:08:36.301 "write_zeroes": true, 00:08:36.301 "zcopy": false, 00:08:36.301 "get_zone_info": false, 00:08:36.301 "zone_management": false, 00:08:36.301 "zone_append": false, 00:08:36.301 "compare": false, 00:08:36.301 "compare_and_write": false, 00:08:36.301 "abort": false, 00:08:36.301 "seek_hole": true, 00:08:36.301 "seek_data": true, 00:08:36.301 "copy": false, 00:08:36.301 "nvme_iov_md": false 00:08:36.301 }, 00:08:36.301 "driver_specific": { 00:08:36.301 "lvol": { 00:08:36.301 "lvol_store_uuid": "7deb329b-429e-4cf2-82b6-da63e143e71b", 00:08:36.301 "base_bdev": "aio_bdev", 00:08:36.301 "thin_provision": false, 00:08:36.301 "num_allocated_clusters": 38, 00:08:36.301 "snapshot": false, 00:08:36.301 "clone": false, 00:08:36.301 "esnap_clone": false 00:08:36.301 } 00:08:36.301 } 00:08:36.301 } 00:08:36.301 ] 00:08:36.301 05:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:36.301 05:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7deb329b-429e-4cf2-82b6-da63e143e71b 00:08:36.301 05:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:36.560 05:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:36.560 05:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7deb329b-429e-4cf2-82b6-da63e143e71b 00:08:36.560 05:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:36.818 05:58:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:36.818 05:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1243a7d6-0e3d-4c28-a5eb-98cfcc02fee5 00:08:36.818 05:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7deb329b-429e-4cf2-82b6-da63e143e71b 00:08:37.077 05:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:37.336 05:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:37.336 00:08:37.336 real 0m15.673s 00:08:37.336 user 0m15.174s 00:08:37.336 sys 0m1.541s 00:08:37.336 05:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.336 05:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:37.336 ************************************ 00:08:37.336 END TEST lvs_grow_clean 00:08:37.336 ************************************ 00:08:37.336 05:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:37.336 05:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:37.336 05:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.336 05:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:37.336 ************************************ 
00:08:37.336 START TEST lvs_grow_dirty 00:08:37.336 ************************************ 00:08:37.336 05:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:37.336 05:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:37.336 05:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:37.336 05:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:37.336 05:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:37.336 05:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:37.336 05:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:37.336 05:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:37.336 05:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:37.336 05:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:37.595 05:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:37.595 05:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:37.854 05:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0f631373-885e-4c20-ba4c-c2de92aa3d3a 00:08:37.854 05:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f631373-885e-4c20-ba4c-c2de92aa3d3a 00:08:37.854 05:58:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:38.112 05:58:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:38.112 05:58:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:38.112 05:58:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0f631373-885e-4c20-ba4c-c2de92aa3d3a lvol 150 00:08:38.112 05:58:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0fd9645e-0578-46a9-9e76-3d9e39161313 00:08:38.112 05:58:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:38.112 05:58:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:38.371 [2024-12-15 05:58:58.381851] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:08:38.371 [2024-12-15 05:58:58.381902] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:38.371 true 00:08:38.371 05:58:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f631373-885e-4c20-ba4c-c2de92aa3d3a 00:08:38.371 05:58:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:38.630 05:58:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:38.630 05:58:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:38.630 05:58:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0fd9645e-0578-46a9-9e76-3d9e39161313 00:08:38.889 05:58:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:39.148 [2024-12-15 05:58:59.091931] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:39.148 05:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:39.406 05:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=823582 00:08:39.406 05:58:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:39.407 05:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:39.407 05:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 823582 /var/tmp/bdevperf.sock 00:08:39.407 05:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 823582 ']' 00:08:39.407 05:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:39.407 05:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.407 05:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:39.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:39.407 05:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.407 05:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:39.407 [2024-12-15 05:58:59.331295] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:39.407 [2024-12-15 05:58:59.331342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid823582 ] 00:08:39.407 [2024-12-15 05:58:59.404580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.407 [2024-12-15 05:58:59.426271] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.407 05:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.407 05:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:39.407 05:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:39.975 Nvme0n1 00:08:39.975 05:58:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:40.233 [ 00:08:40.233 { 00:08:40.233 "name": "Nvme0n1", 00:08:40.233 "aliases": [ 00:08:40.233 "0fd9645e-0578-46a9-9e76-3d9e39161313" 00:08:40.233 ], 00:08:40.233 "product_name": "NVMe disk", 00:08:40.233 "block_size": 4096, 00:08:40.233 "num_blocks": 38912, 00:08:40.233 "uuid": "0fd9645e-0578-46a9-9e76-3d9e39161313", 00:08:40.233 "numa_id": 1, 00:08:40.233 "assigned_rate_limits": { 00:08:40.233 "rw_ios_per_sec": 0, 00:08:40.233 "rw_mbytes_per_sec": 0, 00:08:40.233 "r_mbytes_per_sec": 0, 00:08:40.233 "w_mbytes_per_sec": 0 00:08:40.233 }, 00:08:40.233 "claimed": false, 00:08:40.233 "zoned": false, 00:08:40.233 "supported_io_types": { 00:08:40.233 "read": true, 
00:08:40.233 "write": true, 00:08:40.233 "unmap": true, 00:08:40.233 "flush": true, 00:08:40.233 "reset": true, 00:08:40.233 "nvme_admin": true, 00:08:40.233 "nvme_io": true, 00:08:40.233 "nvme_io_md": false, 00:08:40.233 "write_zeroes": true, 00:08:40.233 "zcopy": false, 00:08:40.233 "get_zone_info": false, 00:08:40.233 "zone_management": false, 00:08:40.233 "zone_append": false, 00:08:40.233 "compare": true, 00:08:40.233 "compare_and_write": true, 00:08:40.233 "abort": true, 00:08:40.233 "seek_hole": false, 00:08:40.233 "seek_data": false, 00:08:40.233 "copy": true, 00:08:40.233 "nvme_iov_md": false 00:08:40.233 }, 00:08:40.233 "memory_domains": [ 00:08:40.233 { 00:08:40.233 "dma_device_id": "system", 00:08:40.233 "dma_device_type": 1 00:08:40.234 } 00:08:40.234 ], 00:08:40.234 "driver_specific": { 00:08:40.234 "nvme": [ 00:08:40.234 { 00:08:40.234 "trid": { 00:08:40.234 "trtype": "TCP", 00:08:40.234 "adrfam": "IPv4", 00:08:40.234 "traddr": "10.0.0.2", 00:08:40.234 "trsvcid": "4420", 00:08:40.234 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:40.234 }, 00:08:40.234 "ctrlr_data": { 00:08:40.234 "cntlid": 1, 00:08:40.234 "vendor_id": "0x8086", 00:08:40.234 "model_number": "SPDK bdev Controller", 00:08:40.234 "serial_number": "SPDK0", 00:08:40.234 "firmware_revision": "25.01", 00:08:40.234 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:40.234 "oacs": { 00:08:40.234 "security": 0, 00:08:40.234 "format": 0, 00:08:40.234 "firmware": 0, 00:08:40.234 "ns_manage": 0 00:08:40.234 }, 00:08:40.234 "multi_ctrlr": true, 00:08:40.234 "ana_reporting": false 00:08:40.234 }, 00:08:40.234 "vs": { 00:08:40.234 "nvme_version": "1.3" 00:08:40.234 }, 00:08:40.234 "ns_data": { 00:08:40.234 "id": 1, 00:08:40.234 "can_share": true 00:08:40.234 } 00:08:40.234 } 00:08:40.234 ], 00:08:40.234 "mp_policy": "active_passive" 00:08:40.234 } 00:08:40.234 } 00:08:40.234 ] 00:08:40.234 05:59:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=823806 
00:08:40.234 05:59:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:40.234 05:59:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:40.234 Running I/O for 10 seconds... 00:08:41.169 Latency(us) 00:08:41.169 [2024-12-15T04:59:01.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.169 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.169 Nvme0n1 : 1.00 23422.00 91.49 0.00 0.00 0.00 0.00 0.00 00:08:41.170 [2024-12-15T04:59:01.310Z] =================================================================================================================== 00:08:41.170 [2024-12-15T04:59:01.310Z] Total : 23422.00 91.49 0.00 0.00 0.00 0.00 0.00 00:08:41.170 00:08:42.158 05:59:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0f631373-885e-4c20-ba4c-c2de92aa3d3a 00:08:42.158 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.158 Nvme0n1 : 2.00 23690.00 92.54 0.00 0.00 0.00 0.00 0.00 00:08:42.158 [2024-12-15T04:59:02.298Z] =================================================================================================================== 00:08:42.158 [2024-12-15T04:59:02.298Z] Total : 23690.00 92.54 0.00 0.00 0.00 0.00 0.00 00:08:42.158 00:08:42.417 true 00:08:42.417 05:59:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f631373-885e-4c20-ba4c-c2de92aa3d3a 00:08:42.417 05:59:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:08:42.417 05:59:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:42.417 05:59:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:42.417 05:59:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 823806 00:08:43.352 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.353 Nvme0n1 : 3.00 23737.33 92.72 0.00 0.00 0.00 0.00 0.00 00:08:43.353 [2024-12-15T04:59:03.493Z] =================================================================================================================== 00:08:43.353 [2024-12-15T04:59:03.493Z] Total : 23737.33 92.72 0.00 0.00 0.00 0.00 0.00 00:08:43.353 00:08:44.288 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.288 Nvme0n1 : 4.00 23822.50 93.06 0.00 0.00 0.00 0.00 0.00 00:08:44.288 [2024-12-15T04:59:04.428Z] =================================================================================================================== 00:08:44.288 [2024-12-15T04:59:04.428Z] Total : 23822.50 93.06 0.00 0.00 0.00 0.00 0.00 00:08:44.288 00:08:45.226 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.226 Nvme0n1 : 5.00 23873.40 93.26 0.00 0.00 0.00 0.00 0.00 00:08:45.226 [2024-12-15T04:59:05.366Z] =================================================================================================================== 00:08:45.226 [2024-12-15T04:59:05.366Z] Total : 23873.40 93.26 0.00 0.00 0.00 0.00 0.00 00:08:45.226 00:08:46.161 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.161 Nvme0n1 : 6.00 23918.33 93.43 0.00 0.00 0.00 0.00 0.00 00:08:46.161 [2024-12-15T04:59:06.301Z] =================================================================================================================== 00:08:46.161 
[2024-12-15T04:59:06.301Z] Total : 23918.33 93.43 0.00 0.00 0.00 0.00 0.00 00:08:46.161 00:08:47.098 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.098 Nvme0n1 : 7.00 23958.57 93.59 0.00 0.00 0.00 0.00 0.00 00:08:47.098 [2024-12-15T04:59:07.238Z] =================================================================================================================== 00:08:47.098 [2024-12-15T04:59:07.238Z] Total : 23958.57 93.59 0.00 0.00 0.00 0.00 0.00 00:08:47.098 00:08:48.474 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.474 Nvme0n1 : 8.00 23988.38 93.70 0.00 0.00 0.00 0.00 0.00 00:08:48.474 [2024-12-15T04:59:08.614Z] =================================================================================================================== 00:08:48.474 [2024-12-15T04:59:08.614Z] Total : 23988.38 93.70 0.00 0.00 0.00 0.00 0.00 00:08:48.474 00:08:49.410 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.410 Nvme0n1 : 9.00 24005.22 93.77 0.00 0.00 0.00 0.00 0.00 00:08:49.410 [2024-12-15T04:59:09.550Z] =================================================================================================================== 00:08:49.410 [2024-12-15T04:59:09.550Z] Total : 24005.22 93.77 0.00 0.00 0.00 0.00 0.00 00:08:49.410 00:08:50.347 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.347 Nvme0n1 : 10.00 24011.80 93.80 0.00 0.00 0.00 0.00 0.00 00:08:50.347 [2024-12-15T04:59:10.487Z] =================================================================================================================== 00:08:50.347 [2024-12-15T04:59:10.487Z] Total : 24011.80 93.80 0.00 0.00 0.00 0.00 0.00 00:08:50.347 00:08:50.347 00:08:50.347 Latency(us) 00:08:50.347 [2024-12-15T04:59:10.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:50.347 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:50.347 Nvme0n1 : 10.00 24003.76 93.76 0.00 0.00 5328.96 2637.04 10860.25 00:08:50.347 [2024-12-15T04:59:10.487Z] =================================================================================================================== 00:08:50.347 [2024-12-15T04:59:10.487Z] Total : 24003.76 93.76 0.00 0.00 5328.96 2637.04 10860.25 00:08:50.347 { 00:08:50.347 "results": [ 00:08:50.347 { 00:08:50.347 "job": "Nvme0n1", 00:08:50.347 "core_mask": "0x2", 00:08:50.347 "workload": "randwrite", 00:08:50.347 "status": "finished", 00:08:50.347 "queue_depth": 128, 00:08:50.347 "io_size": 4096, 00:08:50.347 "runtime": 10.00339, 00:08:50.347 "iops": 24003.762724436416, 00:08:50.347 "mibps": 93.76469814232975, 00:08:50.347 "io_failed": 0, 00:08:50.347 "io_timeout": 0, 00:08:50.347 "avg_latency_us": 5328.960432103208, 00:08:50.347 "min_latency_us": 2637.0438095238096, 00:08:50.347 "max_latency_us": 10860.251428571428 00:08:50.347 } 00:08:50.347 ], 00:08:50.347 "core_count": 1 00:08:50.347 } 00:08:50.347 05:59:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 823582 00:08:50.347 05:59:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 823582 ']' 00:08:50.347 05:59:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 823582 00:08:50.347 05:59:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:50.347 05:59:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:50.347 05:59:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 823582 00:08:50.347 05:59:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:50.347 05:59:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:50.347 05:59:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 823582' 00:08:50.347 killing process with pid 823582 00:08:50.347 05:59:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 823582 00:08:50.347 Received shutdown signal, test time was about 10.000000 seconds 00:08:50.347 00:08:50.347 Latency(us) 00:08:50.347 [2024-12-15T04:59:10.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:50.347 [2024-12-15T04:59:10.487Z] =================================================================================================================== 00:08:50.347 [2024-12-15T04:59:10.487Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:50.347 05:59:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 823582 00:08:50.347 05:59:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:50.606 05:59:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:50.864 05:59:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:50.864 05:59:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f631373-885e-4c20-ba4c-c2de92aa3d3a 00:08:51.123 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:08:51.123 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:51.123 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 820555 00:08:51.123 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 820555 00:08:51.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 820555 Killed "${NVMF_APP[@]}" "$@" 00:08:51.123 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:51.123 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:51.123 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:51.123 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:51.123 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:51.123 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=825602 00:08:51.123 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 825602 00:08:51.123 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:51.123 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 825602 ']' 00:08:51.123 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.123 05:59:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.123 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.123 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.123 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:51.123 [2024-12-15 05:59:11.184588] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:51.123 [2024-12-15 05:59:11.184634] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.123 [2024-12-15 05:59:11.261541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.382 [2024-12-15 05:59:11.282573] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.382 [2024-12-15 05:59:11.282607] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.382 [2024-12-15 05:59:11.282617] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.382 [2024-12-15 05:59:11.282624] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.382 [2024-12-15 05:59:11.282629] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:51.382 [2024-12-15 05:59:11.283103] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.382 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:51.382 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:51.382 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:51.382 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:51.382 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:51.382 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.382 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:51.642 [2024-12-15 05:59:11.583231] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:51.642 [2024-12-15 05:59:11.583310] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:51.642 [2024-12-15 05:59:11.583335] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:51.642 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:51.642 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0fd9645e-0578-46a9-9e76-3d9e39161313 00:08:51.642 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=0fd9645e-0578-46a9-9e76-3d9e39161313 
00:08:51.642 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:51.642 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:51.642 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:51.642 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:51.642 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:51.901 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0fd9645e-0578-46a9-9e76-3d9e39161313 -t 2000 00:08:51.901 [ 00:08:51.901 { 00:08:51.901 "name": "0fd9645e-0578-46a9-9e76-3d9e39161313", 00:08:51.901 "aliases": [ 00:08:51.901 "lvs/lvol" 00:08:51.901 ], 00:08:51.901 "product_name": "Logical Volume", 00:08:51.901 "block_size": 4096, 00:08:51.901 "num_blocks": 38912, 00:08:51.901 "uuid": "0fd9645e-0578-46a9-9e76-3d9e39161313", 00:08:51.901 "assigned_rate_limits": { 00:08:51.901 "rw_ios_per_sec": 0, 00:08:51.901 "rw_mbytes_per_sec": 0, 00:08:51.901 "r_mbytes_per_sec": 0, 00:08:51.901 "w_mbytes_per_sec": 0 00:08:51.901 }, 00:08:51.901 "claimed": false, 00:08:51.901 "zoned": false, 00:08:51.901 "supported_io_types": { 00:08:51.901 "read": true, 00:08:51.901 "write": true, 00:08:51.901 "unmap": true, 00:08:51.901 "flush": false, 00:08:51.901 "reset": true, 00:08:51.901 "nvme_admin": false, 00:08:51.901 "nvme_io": false, 00:08:51.901 "nvme_io_md": false, 00:08:51.901 "write_zeroes": true, 00:08:51.901 "zcopy": false, 00:08:51.901 "get_zone_info": false, 00:08:51.901 "zone_management": false, 00:08:51.901 "zone_append": 
false, 00:08:51.901 "compare": false, 00:08:51.901 "compare_and_write": false, 00:08:51.901 "abort": false, 00:08:51.901 "seek_hole": true, 00:08:51.901 "seek_data": true, 00:08:51.901 "copy": false, 00:08:51.901 "nvme_iov_md": false 00:08:51.901 }, 00:08:51.901 "driver_specific": { 00:08:51.901 "lvol": { 00:08:51.901 "lvol_store_uuid": "0f631373-885e-4c20-ba4c-c2de92aa3d3a", 00:08:51.901 "base_bdev": "aio_bdev", 00:08:51.901 "thin_provision": false, 00:08:51.901 "num_allocated_clusters": 38, 00:08:51.901 "snapshot": false, 00:08:51.901 "clone": false, 00:08:51.901 "esnap_clone": false 00:08:51.901 } 00:08:51.901 } 00:08:51.901 } 00:08:51.901 ] 00:08:51.901 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:51.901 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:51.901 05:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f631373-885e-4c20-ba4c-c2de92aa3d3a 00:08:52.160 05:59:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:52.160 05:59:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f631373-885e-4c20-ba4c-c2de92aa3d3a 00:08:52.160 05:59:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:52.418 05:59:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:52.418 05:59:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:08:52.418 [2024-12-15 05:59:12.524221] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:52.419 05:59:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f631373-885e-4c20-ba4c-c2de92aa3d3a 00:08:52.419 05:59:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:52.419 05:59:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f631373-885e-4c20-ba4c-c2de92aa3d3a 00:08:52.419 05:59:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:52.419 05:59:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:52.419 05:59:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:52.677 05:59:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:52.677 05:59:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:52.677 05:59:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:52.677 05:59:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:52.677 05:59:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:52.677 05:59:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f631373-885e-4c20-ba4c-c2de92aa3d3a 00:08:52.677 request: 00:08:52.677 { 00:08:52.677 "uuid": "0f631373-885e-4c20-ba4c-c2de92aa3d3a", 00:08:52.677 "method": "bdev_lvol_get_lvstores", 00:08:52.677 "req_id": 1 00:08:52.677 } 00:08:52.677 Got JSON-RPC error response 00:08:52.677 response: 00:08:52.677 { 00:08:52.677 "code": -19, 00:08:52.677 "message": "No such device" 00:08:52.677 } 00:08:52.677 05:59:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:52.677 05:59:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:52.677 05:59:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:52.677 05:59:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:52.678 05:59:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:52.936 aio_bdev 00:08:52.936 05:59:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0fd9645e-0578-46a9-9e76-3d9e39161313 00:08:52.936 05:59:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=0fd9645e-0578-46a9-9e76-3d9e39161313 00:08:52.936 05:59:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:52.936 05:59:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:52.936 05:59:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:52.936 05:59:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:52.936 05:59:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:53.195 05:59:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0fd9645e-0578-46a9-9e76-3d9e39161313 -t 2000 00:08:53.195 [ 00:08:53.195 { 00:08:53.195 "name": "0fd9645e-0578-46a9-9e76-3d9e39161313", 00:08:53.195 "aliases": [ 00:08:53.195 "lvs/lvol" 00:08:53.195 ], 00:08:53.195 "product_name": "Logical Volume", 00:08:53.195 "block_size": 4096, 00:08:53.195 "num_blocks": 38912, 00:08:53.195 "uuid": "0fd9645e-0578-46a9-9e76-3d9e39161313", 00:08:53.195 "assigned_rate_limits": { 00:08:53.195 "rw_ios_per_sec": 0, 00:08:53.195 "rw_mbytes_per_sec": 0, 00:08:53.195 "r_mbytes_per_sec": 0, 00:08:53.195 "w_mbytes_per_sec": 0 00:08:53.195 }, 00:08:53.195 "claimed": false, 00:08:53.195 "zoned": false, 00:08:53.195 "supported_io_types": { 00:08:53.195 "read": true, 00:08:53.195 "write": true, 00:08:53.195 "unmap": true, 00:08:53.195 "flush": false, 00:08:53.195 "reset": true, 00:08:53.195 "nvme_admin": false, 00:08:53.195 "nvme_io": false, 00:08:53.195 "nvme_io_md": false, 00:08:53.195 "write_zeroes": true, 00:08:53.195 "zcopy": false, 00:08:53.195 "get_zone_info": false, 00:08:53.195 "zone_management": false, 00:08:53.195 "zone_append": false, 00:08:53.195 "compare": false, 00:08:53.195 "compare_and_write": false, 
00:08:53.195 "abort": false, 00:08:53.195 "seek_hole": true, 00:08:53.195 "seek_data": true, 00:08:53.195 "copy": false, 00:08:53.195 "nvme_iov_md": false 00:08:53.195 }, 00:08:53.195 "driver_specific": { 00:08:53.195 "lvol": { 00:08:53.195 "lvol_store_uuid": "0f631373-885e-4c20-ba4c-c2de92aa3d3a", 00:08:53.195 "base_bdev": "aio_bdev", 00:08:53.195 "thin_provision": false, 00:08:53.195 "num_allocated_clusters": 38, 00:08:53.195 "snapshot": false, 00:08:53.195 "clone": false, 00:08:53.195 "esnap_clone": false 00:08:53.195 } 00:08:53.195 } 00:08:53.195 } 00:08:53.195 ] 00:08:53.195 05:59:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:53.195 05:59:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f631373-885e-4c20-ba4c-c2de92aa3d3a 00:08:53.195 05:59:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:53.454 05:59:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:53.454 05:59:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f631373-885e-4c20-ba4c-c2de92aa3d3a 00:08:53.454 05:59:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:53.713 05:59:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:53.713 05:59:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0fd9645e-0578-46a9-9e76-3d9e39161313 00:08:53.713 05:59:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0f631373-885e-4c20-ba4c-c2de92aa3d3a 00:08:53.971 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:54.230 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:54.230 00:08:54.230 real 0m16.817s 00:08:54.230 user 0m43.839s 00:08:54.230 sys 0m3.671s 00:08:54.230 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.230 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:54.230 ************************************ 00:08:54.230 END TEST lvs_grow_dirty 00:08:54.230 ************************************ 00:08:54.230 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:54.230 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:54.230 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:54.230 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:54.230 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:54.230 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:54.230 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:54.230 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:54.230 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:54.230 nvmf_trace.0 00:08:54.230 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:54.230 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:54.230 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:54.230 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:54.230 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:54.230 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:54.230 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:54.230 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:54.230 rmmod nvme_tcp 00:08:54.489 rmmod nvme_fabrics 00:08:54.489 rmmod nvme_keyring 00:08:54.489 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:54.489 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:54.489 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:54.489 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 825602 ']' 00:08:54.489 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 825602 00:08:54.489 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 825602 ']' 00:08:54.489 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 825602 
00:08:54.489 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:54.489 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.489 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 825602 00:08:54.489 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.489 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.489 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 825602' 00:08:54.489 killing process with pid 825602 00:08:54.489 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 825602 00:08:54.489 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 825602 00:08:54.489 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:54.489 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:54.489 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:54.489 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:54.489 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:54.489 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:54.489 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:54.489 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:54.489 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:08:54.489 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.489 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.748 05:59:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.651 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:56.651 00:08:56.651 real 0m41.765s 00:08:56.651 user 1m4.570s 00:08:56.651 sys 0m10.139s 00:08:56.651 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.651 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:56.652 ************************************ 00:08:56.652 END TEST nvmf_lvs_grow 00:08:56.652 ************************************ 00:08:56.652 05:59:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:56.652 05:59:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:56.652 05:59:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.652 05:59:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:56.652 ************************************ 00:08:56.652 START TEST nvmf_bdev_io_wait 00:08:56.652 ************************************ 00:08:56.652 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:56.911 * Looking for test storage... 
00:08:56.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:56.911 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.911 --rc genhtml_branch_coverage=1 00:08:56.911 --rc genhtml_function_coverage=1 00:08:56.911 --rc genhtml_legend=1 00:08:56.911 --rc geninfo_all_blocks=1 00:08:56.911 --rc geninfo_unexecuted_blocks=1 00:08:56.911 00:08:56.911 ' 00:08:56.911 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:56.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.911 --rc genhtml_branch_coverage=1 00:08:56.912 --rc genhtml_function_coverage=1 00:08:56.912 --rc genhtml_legend=1 00:08:56.912 --rc geninfo_all_blocks=1 00:08:56.912 --rc geninfo_unexecuted_blocks=1 00:08:56.912 00:08:56.912 ' 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:56.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.912 --rc genhtml_branch_coverage=1 00:08:56.912 --rc genhtml_function_coverage=1 00:08:56.912 --rc genhtml_legend=1 00:08:56.912 --rc geninfo_all_blocks=1 00:08:56.912 --rc geninfo_unexecuted_blocks=1 00:08:56.912 00:08:56.912 ' 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:56.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.912 --rc genhtml_branch_coverage=1 00:08:56.912 --rc genhtml_function_coverage=1 00:08:56.912 --rc genhtml_legend=1 00:08:56.912 --rc geninfo_all_blocks=1 00:08:56.912 --rc geninfo_unexecuted_blocks=1 00:08:56.912 00:08:56.912 ' 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:56.912 05:59:16 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:56.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:56.912 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.913 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.913 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:08:56.913 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:56.913 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:56.913 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:56.913 05:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:03.482 05:59:22 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:03.482 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:03.482 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.482 05:59:22 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:03.482 Found net devices under 0000:af:00.0: cvl_0_0 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.482 
05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:03.482 Found net devices under 0000:af:00.1: cvl_0_1 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:03.482 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:03.483 05:59:22 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:03.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:03.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:09:03.483 00:09:03.483 --- 10.0.0.2 ping statistics --- 00:09:03.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.483 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:03.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:03.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:09:03.483 00:09:03.483 --- 10.0.0.1 ping statistics --- 00:09:03.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.483 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=829603 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 829603 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 829603 ']' 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.483 05:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.483 [2024-12-15 05:59:23.009807] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:09:03.483 [2024-12-15 05:59:23.009851] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.483 [2024-12-15 05:59:23.085438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:03.483 [2024-12-15 05:59:23.110147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.483 [2024-12-15 05:59:23.110182] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:03.483 [2024-12-15 05:59:23.110189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.483 [2024-12-15 05:59:23.110195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.483 [2024-12-15 05:59:23.110200] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:03.483 [2024-12-15 05:59:23.113013] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.483 [2024-12-15 05:59:23.113056] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:03.483 [2024-12-15 05:59:23.113167] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:03.483 [2024-12-15 05:59:23.113166] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.483 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.483 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:03.483 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:03.483 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:03.483 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.483 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.483 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:03.483 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.483 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.483 05:59:23 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.483 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:03.483 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.483 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.483 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.483 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:03.483 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.483 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.483 [2024-12-15 05:59:23.261209] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.483 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.483 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:03.483 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.483 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.483 Malloc0 00:09:03.483 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.483 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:03.483 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.483 
05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.483 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.483 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:03.483 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.484 [2024-12-15 05:59:23.312418] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=829826 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=829828 
00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:03.484 { 00:09:03.484 "params": { 00:09:03.484 "name": "Nvme$subsystem", 00:09:03.484 "trtype": "$TEST_TRANSPORT", 00:09:03.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:03.484 "adrfam": "ipv4", 00:09:03.484 "trsvcid": "$NVMF_PORT", 00:09:03.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:03.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:03.484 "hdgst": ${hdgst:-false}, 00:09:03.484 "ddgst": ${ddgst:-false} 00:09:03.484 }, 00:09:03.484 "method": "bdev_nvme_attach_controller" 00:09:03.484 } 00:09:03.484 EOF 00:09:03.484 )") 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=829830 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:03.484 { 00:09:03.484 "params": { 00:09:03.484 
"name": "Nvme$subsystem", 00:09:03.484 "trtype": "$TEST_TRANSPORT", 00:09:03.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:03.484 "adrfam": "ipv4", 00:09:03.484 "trsvcid": "$NVMF_PORT", 00:09:03.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:03.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:03.484 "hdgst": ${hdgst:-false}, 00:09:03.484 "ddgst": ${ddgst:-false} 00:09:03.484 }, 00:09:03.484 "method": "bdev_nvme_attach_controller" 00:09:03.484 } 00:09:03.484 EOF 00:09:03.484 )") 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=829833 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:09:03.484 { 00:09:03.484 "params": { 00:09:03.484 "name": "Nvme$subsystem", 00:09:03.484 "trtype": "$TEST_TRANSPORT", 00:09:03.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:03.484 "adrfam": "ipv4", 00:09:03.484 "trsvcid": "$NVMF_PORT", 00:09:03.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:03.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:03.484 "hdgst": ${hdgst:-false}, 00:09:03.484 "ddgst": ${ddgst:-false} 00:09:03.484 }, 00:09:03.484 "method": "bdev_nvme_attach_controller" 00:09:03.484 } 00:09:03.484 EOF 00:09:03.484 )") 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:03.484 { 00:09:03.484 "params": { 00:09:03.484 "name": "Nvme$subsystem", 00:09:03.484 "trtype": "$TEST_TRANSPORT", 00:09:03.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:03.484 "adrfam": "ipv4", 00:09:03.484 "trsvcid": "$NVMF_PORT", 00:09:03.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:03.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:03.484 "hdgst": ${hdgst:-false}, 00:09:03.484 "ddgst": ${ddgst:-false} 00:09:03.484 }, 00:09:03.484 "method": "bdev_nvme_attach_controller" 00:09:03.484 } 00:09:03.484 EOF 00:09:03.484 )") 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 829826 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:03.484 
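The heredoc blocks repeated in the trace above come from `gen_nvmf_target_json`, which builds one JSON fragment per subsystem and joins them for bdevperf's `--json` input. A minimal sketch of that pattern (an illustrative reduction, not the actual `nvmf/common.sh`; the variable values here are taken from the resolved config printed later in this log):

```shell
#!/usr/bin/env bash
# Sketch of the config-assembly pattern: one heredoc-generated JSON fragment
# per subsystem, collected into an array, then joined with IFS=,.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
config=()
for subsystem in 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Join the fragments with commas, as the IFS=, / printf '%s\n' lines below do.
old_ifs=$IFS
IFS=,
joined="${config[*]}"
IFS=$old_ifs
printf '%s\n' "$joined"
```

The unexpanded `$subsystem` / `${hdgst:-false}` text in the log is each fragment echoed by xtrace before expansion; the fully resolved JSON (`Nvme1`, `10.0.0.2`, `4420`) appears once `printf` emits the joined config.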
05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:03.484 "params": { 00:09:03.484 "name": "Nvme1", 00:09:03.484 "trtype": "tcp", 00:09:03.484 "traddr": "10.0.0.2", 00:09:03.484 "adrfam": "ipv4", 00:09:03.484 "trsvcid": "4420", 00:09:03.484 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:03.484 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:03.484 "hdgst": false, 00:09:03.484 "ddgst": false 00:09:03.484 }, 00:09:03.484 "method": "bdev_nvme_attach_controller" 00:09:03.484 }' 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:03.484 "params": { 00:09:03.484 "name": "Nvme1", 00:09:03.484 "trtype": "tcp", 00:09:03.484 "traddr": "10.0.0.2", 00:09:03.484 "adrfam": "ipv4", 00:09:03.484 "trsvcid": "4420", 00:09:03.484 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:03.484 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:03.484 "hdgst": false, 00:09:03.484 "ddgst": false 00:09:03.484 }, 00:09:03.484 "method": "bdev_nvme_attach_controller" 00:09:03.484 }' 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:03.484 "params": { 00:09:03.484 "name": "Nvme1", 00:09:03.484 "trtype": "tcp", 00:09:03.484 "traddr": "10.0.0.2", 00:09:03.484 "adrfam": "ipv4", 00:09:03.484 "trsvcid": "4420", 00:09:03.484 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:03.484 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:03.484 "hdgst": false, 00:09:03.484 "ddgst": false 00:09:03.484 }, 00:09:03.484 "method": "bdev_nvme_attach_controller" 00:09:03.484 }' 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:03.484 05:59:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:03.484 "params": { 00:09:03.484 "name": "Nvme1", 00:09:03.484 "trtype": "tcp", 00:09:03.484 "traddr": "10.0.0.2", 00:09:03.484 "adrfam": "ipv4", 00:09:03.484 "trsvcid": "4420", 00:09:03.484 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:03.484 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:03.484 "hdgst": false, 00:09:03.484 "ddgst": false 00:09:03.484 }, 00:09:03.484 "method": "bdev_nvme_attach_controller" 00:09:03.485 }' 00:09:03.485 [2024-12-15 05:59:23.363437] Starting SPDK v25.01-pre git sha1 
e01cb43b8 / DPDK 23.11.0 initialization... 00:09:03.485 [2024-12-15 05:59:23.363439] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:09:03.485 [2024-12-15 05:59:23.363442] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:09:03.485 [2024-12-15 05:59:23.363487] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:03.485 [2024-12-15 05:59:23.363488] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:03.485 [2024-12-15 05:59:23.363488] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:03.485 [2024-12-15 05:59:23.366027] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:09:03.485 [2024-12-15 05:59:23.366074] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:03.485 [2024-12-15 05:59:23.541649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.485 [2024-12-15 05:59:23.559584] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:09:03.743 [2024-12-15 05:59:23.646020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.743 [2024-12-15 05:59:23.663265] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:09:03.743 [2024-12-15 05:59:23.740576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.743 [2024-12-15 05:59:23.764037] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:09:03.743 [2024-12-15 05:59:23.794325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.743 [2024-12-15 05:59:23.810454] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:09:04.001 Running I/O for 1 seconds... 00:09:04.001 Running I/O for 1 seconds... 00:09:04.001 Running I/O for 1 seconds... 00:09:04.001 Running I/O for 1 seconds... 
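The four "Running I/O for 1 seconds..." lines above come from four bdevperf instances launched in the background (one per workload: write, read, flush, unmap) and later reaped via `wait` on their recorded PIDs (829826, 829830, 829833, ...). A hedged sketch of that fan-out/wait shape, with a sleep/echo worker standing in for bdevperf:

```shell
#!/usr/bin/env bash
# Fan out one background worker per workload, record each PID, then wait on
# each in turn so every exit status is collected before teardown.
pids=()
for workload in write read flush unmap; do
  ( sleep 0.1; echo "Running I/O for 1 seconds... ($workload)" ) &  # stand-in for bdevperf -w "$workload" -t 1
  pids+=($!)
done
status=0
for pid in "${pids[@]}"; do
  wait "$pid" || status=$?  # remember the first failing worker's exit code
done
```

Waiting on each PID individually (rather than a bare `wait`) is what lets the script attribute a failure to a specific workload's bdevperf run.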
00:09:04.935 7581.00 IOPS, 29.61 MiB/s 00:09:04.935 Latency(us) 00:09:04.935 [2024-12-15T04:59:25.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.935 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:04.935 Nvme1n1 : 1.02 7606.71 29.71 0.00 0.00 16729.08 5960.66 28086.86 00:09:04.935 [2024-12-15T04:59:25.075Z] =================================================================================================================== 00:09:04.935 [2024-12-15T04:59:25.075Z] Total : 7606.71 29.71 0.00 0.00 16729.08 5960.66 28086.86 00:09:04.935 12657.00 IOPS, 49.44 MiB/s 00:09:04.935 Latency(us) 00:09:04.935 [2024-12-15T04:59:25.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.935 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:04.935 Nvme1n1 : 1.01 12711.70 49.66 0.00 0.00 10037.57 5398.92 21970.16 00:09:04.935 [2024-12-15T04:59:25.075Z] =================================================================================================================== 00:09:04.935 [2024-12-15T04:59:25.075Z] Total : 12711.70 49.66 0.00 0.00 10037.57 5398.92 21970.16 00:09:04.935 7330.00 IOPS, 28.63 MiB/s 00:09:04.935 Latency(us) 00:09:04.935 [2024-12-15T04:59:25.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.935 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:04.935 Nvme1n1 : 1.01 7428.67 29.02 0.00 0.00 17188.02 3651.29 38447.79 00:09:04.935 [2024-12-15T04:59:25.075Z] =================================================================================================================== 00:09:04.935 [2024-12-15T04:59:25.075Z] Total : 7428.67 29.02 0.00 0.00 17188.02 3651.29 38447.79 00:09:05.197 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 829828 00:09:05.197 244056.00 IOPS, 953.34 MiB/s 00:09:05.197 Latency(us) 00:09:05.197 
[2024-12-15T04:59:25.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.197 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:05.197 Nvme1n1 : 1.00 243675.09 951.86 0.00 0.00 522.89 224.30 1552.58 00:09:05.197 [2024-12-15T04:59:25.337Z] =================================================================================================================== 00:09:05.197 [2024-12-15T04:59:25.337Z] Total : 243675.09 951.86 0.00 0.00 522.89 224.30 1552.58 00:09:05.197 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 829830 00:09:05.197 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 829833 00:09:05.197 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:05.197 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.197 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:05.197 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.197 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:05.197 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:05.197 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:05.197 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:05.197 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:05.197 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:05.197 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in 
{1..20} 00:09:05.197 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:05.197 rmmod nvme_tcp 00:09:05.197 rmmod nvme_fabrics 00:09:05.197 rmmod nvme_keyring 00:09:05.197 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:05.197 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:05.197 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:05.197 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 829603 ']' 00:09:05.197 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 829603 00:09:05.197 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 829603 ']' 00:09:05.197 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 829603 00:09:05.197 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:05.197 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.197 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 829603 00:09:05.456 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.456 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.456 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 829603' 00:09:05.456 killing process with pid 829603 00:09:05.456 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 829603 00:09:05.456 05:59:25 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 829603 00:09:05.456 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:05.456 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:05.456 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:05.456 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:05.456 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:05.456 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:05.457 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:05.457 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:05.457 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:05.457 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.457 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.457 05:59:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.992 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:07.992 00:09:07.992 real 0m10.814s 00:09:07.992 user 0m16.531s 00:09:07.992 sys 0m6.088s 00:09:07.992 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.992 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:07.992 ************************************ 
00:09:07.992 END TEST nvmf_bdev_io_wait 00:09:07.992 ************************************ 00:09:07.992 05:59:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:07.992 05:59:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:07.992 05:59:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.992 05:59:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:07.992 ************************************ 00:09:07.992 START TEST nvmf_queue_depth 00:09:07.992 ************************************ 00:09:07.992 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:07.992 * Looking for test storage... 00:09:07.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:07.992 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:07.992 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:09:07.992 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:07.992 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:07.992 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:07.992 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:07.992 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:07.992 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:09:07.992 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:07.992 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:07.992 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:07.992 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:07.992 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:07.992 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:07.992 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:07.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.993 --rc genhtml_branch_coverage=1 00:09:07.993 --rc genhtml_function_coverage=1 00:09:07.993 --rc genhtml_legend=1 00:09:07.993 --rc geninfo_all_blocks=1 00:09:07.993 --rc 
geninfo_unexecuted_blocks=1 00:09:07.993 00:09:07.993 ' 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:07.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.993 --rc genhtml_branch_coverage=1 00:09:07.993 --rc genhtml_function_coverage=1 00:09:07.993 --rc genhtml_legend=1 00:09:07.993 --rc geninfo_all_blocks=1 00:09:07.993 --rc geninfo_unexecuted_blocks=1 00:09:07.993 00:09:07.993 ' 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:07.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.993 --rc genhtml_branch_coverage=1 00:09:07.993 --rc genhtml_function_coverage=1 00:09:07.993 --rc genhtml_legend=1 00:09:07.993 --rc geninfo_all_blocks=1 00:09:07.993 --rc geninfo_unexecuted_blocks=1 00:09:07.993 00:09:07.993 ' 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:07.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.993 --rc genhtml_branch_coverage=1 00:09:07.993 --rc genhtml_function_coverage=1 00:09:07.993 --rc genhtml_legend=1 00:09:07.993 --rc geninfo_all_blocks=1 00:09:07.993 --rc geninfo_unexecuted_blocks=1 00:09:07.993 00:09:07.993 ' 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:07.993 05:59:27 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:07.993 05:59:27 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:07.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.993 05:59:27 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:07.993 05:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.563 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:14.563 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:14.563 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:14.563 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:14.563 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:14.563 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:14.563 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:14.563 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:14.563 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:14.563 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:14.564 05:59:33 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:14.564 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:14.564 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:14.564 Found net devices under 0000:af:00.0: cvl_0_0 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:14.564 Found net devices under 0000:af:00.1: cvl_0_1 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:14.564 
05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:14.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:14.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:09:14.564 00:09:14.564 --- 10.0.0.2 ping statistics --- 00:09:14.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.564 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:14.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:14.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:09:14.564 00:09:14.564 --- 10.0.0.1 ping statistics --- 00:09:14.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.564 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:14.564 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.565 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=833561 00:09:14.565 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 833561 
00:09:14.565 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:14.565 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 833561 ']' 00:09:14.565 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.565 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.565 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.565 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.565 05:59:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.565 [2024-12-15 05:59:33.889144] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:09:14.565 [2024-12-15 05:59:33.889191] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.565 [2024-12-15 05:59:33.969990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.565 [2024-12-15 05:59:33.990662] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.565 [2024-12-15 05:59:33.990699] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:14.565 [2024-12-15 05:59:33.990706] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.565 [2024-12-15 05:59:33.990711] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.565 [2024-12-15 05:59:33.990716] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:14.565 [2024-12-15 05:59:33.991197] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.565 [2024-12-15 05:59:34.133157] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.565 Malloc0 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.565 [2024-12-15 05:59:34.183397] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:14.565 05:59:34 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=833617 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 833617 /var/tmp/bdevperf.sock 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 833617 ']' 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:14.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.565 [2024-12-15 05:59:34.232018] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:09:14.565 [2024-12-15 05:59:34.232059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid833617 ] 00:09:14.565 [2024-12-15 05:59:34.305424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.565 [2024-12-15 05:59:34.328235] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.565 NVMe0n1 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.565 05:59:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:14.565 Running I/O for 10 seconds... 
00:09:16.756 12180.00 IOPS, 47.58 MiB/s [2024-12-15T04:59:37.833Z] 12288.00 IOPS, 48.00 MiB/s [2024-12-15T04:59:38.769Z] 12291.67 IOPS, 48.01 MiB/s [2024-12-15T04:59:39.706Z] 12420.75 IOPS, 48.52 MiB/s [2024-12-15T04:59:40.643Z] 12472.40 IOPS, 48.72 MiB/s [2024-12-15T04:59:42.022Z] 12449.00 IOPS, 48.63 MiB/s [2024-12-15T04:59:42.960Z] 12482.43 IOPS, 48.76 MiB/s [2024-12-15T04:59:43.898Z] 12526.00 IOPS, 48.93 MiB/s [2024-12-15T04:59:44.835Z] 12519.89 IOPS, 48.91 MiB/s [2024-12-15T04:59:44.835Z] 12571.20 IOPS, 49.11 MiB/s 00:09:24.695 Latency(us) 00:09:24.695 [2024-12-15T04:59:44.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.695 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:24.695 Verification LBA range: start 0x0 length 0x4000 00:09:24.695 NVMe0n1 : 10.07 12588.80 49.17 0.00 0.00 81078.18 18724.57 52179.14 00:09:24.695 [2024-12-15T04:59:44.835Z] =================================================================================================================== 00:09:24.695 [2024-12-15T04:59:44.835Z] Total : 12588.80 49.17 0.00 0.00 81078.18 18724.57 52179.14 00:09:24.695 { 00:09:24.695 "results": [ 00:09:24.695 { 00:09:24.695 "job": "NVMe0n1", 00:09:24.695 "core_mask": "0x1", 00:09:24.695 "workload": "verify", 00:09:24.695 "status": "finished", 00:09:24.695 "verify_range": { 00:09:24.695 "start": 0, 00:09:24.695 "length": 16384 00:09:24.695 }, 00:09:24.696 "queue_depth": 1024, 00:09:24.696 "io_size": 4096, 00:09:24.696 "runtime": 10.067365, 00:09:24.696 "iops": 12588.795578584863, 00:09:24.696 "mibps": 49.17498272884712, 00:09:24.696 "io_failed": 0, 00:09:24.696 "io_timeout": 0, 00:09:24.696 "avg_latency_us": 81078.17585514094, 00:09:24.696 "min_latency_us": 18724.571428571428, 00:09:24.696 "max_latency_us": 52179.13904761905 00:09:24.696 } 00:09:24.696 ], 00:09:24.696 "core_count": 1 00:09:24.696 } 00:09:24.696 05:59:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- 
# killprocess 833617 00:09:24.696 05:59:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 833617 ']' 00:09:24.696 05:59:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 833617 00:09:24.696 05:59:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:24.696 05:59:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.696 05:59:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 833617 00:09:24.696 05:59:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:24.696 05:59:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:24.696 05:59:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 833617' 00:09:24.696 killing process with pid 833617 00:09:24.696 05:59:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 833617 00:09:24.696 Received shutdown signal, test time was about 10.000000 seconds 00:09:24.696 00:09:24.696 Latency(us) 00:09:24.696 [2024-12-15T04:59:44.836Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.696 [2024-12-15T04:59:44.836Z] =================================================================================================================== 00:09:24.696 [2024-12-15T04:59:44.836Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:24.696 05:59:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 833617 00:09:24.955 05:59:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:24.955 05:59:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:09:24.955 05:59:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:24.955 05:59:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:24.955 05:59:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:24.955 05:59:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:24.955 05:59:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:24.955 05:59:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:24.955 rmmod nvme_tcp 00:09:24.955 rmmod nvme_fabrics 00:09:24.955 rmmod nvme_keyring 00:09:24.955 05:59:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:24.955 05:59:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:24.955 05:59:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:24.955 05:59:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 833561 ']' 00:09:24.955 05:59:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 833561 00:09:24.955 05:59:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 833561 ']' 00:09:24.955 05:59:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 833561 00:09:24.955 05:59:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:24.955 05:59:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.955 05:59:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 833561 00:09:24.955 05:59:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:09:24.955 05:59:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:24.955 05:59:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 833561' 00:09:24.955 killing process with pid 833561 00:09:24.955 05:59:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 833561 00:09:24.955 05:59:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 833561 00:09:25.214 05:59:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:25.214 05:59:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:25.214 05:59:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:25.214 05:59:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:25.214 05:59:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:25.214 05:59:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:25.214 05:59:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:25.214 05:59:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:25.214 05:59:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:25.214 05:59:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.214 05:59:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.214 05:59:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.753 05:59:47 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:27.753 00:09:27.753 real 0m19.647s 00:09:27.753 user 0m22.955s 00:09:27.753 sys 0m6.044s 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:27.753 ************************************ 00:09:27.753 END TEST nvmf_queue_depth 00:09:27.753 ************************************ 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:27.753 ************************************ 00:09:27.753 START TEST nvmf_target_multipath 00:09:27.753 ************************************ 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:27.753 * Looking for test storage... 
00:09:27.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:27.753 05:59:47 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:27.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.753 --rc genhtml_branch_coverage=1 00:09:27.753 --rc genhtml_function_coverage=1 00:09:27.753 --rc genhtml_legend=1 00:09:27.753 --rc geninfo_all_blocks=1 00:09:27.753 --rc geninfo_unexecuted_blocks=1 00:09:27.753 00:09:27.753 ' 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:27.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.753 --rc genhtml_branch_coverage=1 00:09:27.753 --rc genhtml_function_coverage=1 00:09:27.753 --rc genhtml_legend=1 00:09:27.753 --rc geninfo_all_blocks=1 00:09:27.753 --rc geninfo_unexecuted_blocks=1 00:09:27.753 00:09:27.753 ' 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:27.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.753 --rc genhtml_branch_coverage=1 00:09:27.753 --rc genhtml_function_coverage=1 00:09:27.753 --rc genhtml_legend=1 00:09:27.753 --rc geninfo_all_blocks=1 00:09:27.753 --rc geninfo_unexecuted_blocks=1 00:09:27.753 00:09:27.753 ' 00:09:27.753 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:27.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.754 --rc genhtml_branch_coverage=1 00:09:27.754 --rc genhtml_function_coverage=1 00:09:27.754 --rc genhtml_legend=1 00:09:27.754 --rc geninfo_all_blocks=1 00:09:27.754 --rc geninfo_unexecuted_blocks=1 00:09:27.754 00:09:27.754 ' 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:27.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:27.754 05:59:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:09:34.330 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:34.330 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:34.330 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:34.330 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:34.330 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:34.331 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:34.331 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:34.331 Found net devices under 0000:af:00.0: cvl_0_0 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:34.331 05:59:53 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:34.331 Found net devices under 0000:af:00.1: cvl_0_1 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:34.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:34.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:09:34.331 00:09:34.331 --- 10.0.0.2 ping statistics --- 00:09:34.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.331 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:34.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:34.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:09:34.331 00:09:34.331 --- 10.0.0.1 ping statistics --- 00:09:34.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.331 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:34.331 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:34.332 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:34.332 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:34.332 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:34.332 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:34.332 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:34.332 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:34.332 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:34.332 only one NIC for nvmf test 00:09:34.332 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:34.332 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:34.332 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:34.332 05:59:53 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:34.332 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:34.332 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:34.332 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:34.332 rmmod nvme_tcp 00:09:34.332 rmmod nvme_fabrics 00:09:34.332 rmmod nvme_keyring 00:09:34.332 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:34.332 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:34.332 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:34.332 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:34.332 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:34.332 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:34.332 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:34.332 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:34.332 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:34.332 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:34.332 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:34.332 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:34.332 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:09:34.332 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.332 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.332 05:59:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:35.711 00:09:35.711 real 0m8.320s 00:09:35.711 user 0m1.944s 00:09:35.711 sys 0m4.395s 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:35.711 ************************************ 00:09:35.711 END TEST nvmf_target_multipath 00:09:35.711 ************************************ 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:35.711 ************************************ 00:09:35.711 START TEST nvmf_zcopy 00:09:35.711 ************************************ 00:09:35.711 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:35.971 * Looking for test storage... 00:09:35.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.971 05:59:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:35.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.971 --rc genhtml_branch_coverage=1 00:09:35.971 --rc genhtml_function_coverage=1 00:09:35.971 --rc genhtml_legend=1 00:09:35.971 --rc geninfo_all_blocks=1 00:09:35.971 --rc geninfo_unexecuted_blocks=1 00:09:35.971 00:09:35.971 ' 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:35.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.971 --rc genhtml_branch_coverage=1 00:09:35.971 --rc genhtml_function_coverage=1 00:09:35.971 --rc genhtml_legend=1 00:09:35.971 --rc geninfo_all_blocks=1 00:09:35.971 --rc geninfo_unexecuted_blocks=1 00:09:35.971 00:09:35.971 ' 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:35.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.971 --rc genhtml_branch_coverage=1 00:09:35.971 --rc genhtml_function_coverage=1 00:09:35.971 --rc genhtml_legend=1 00:09:35.971 --rc geninfo_all_blocks=1 00:09:35.971 --rc geninfo_unexecuted_blocks=1 00:09:35.971 00:09:35.971 ' 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:35.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.971 --rc genhtml_branch_coverage=1 00:09:35.971 --rc 
genhtml_function_coverage=1 00:09:35.971 --rc genhtml_legend=1 00:09:35.971 --rc geninfo_all_blocks=1 00:09:35.971 --rc geninfo_unexecuted_blocks=1 00:09:35.971 00:09:35.971 ' 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:35.971 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.972 05:59:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:35.972 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:35.972 05:59:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:35.972 05:59:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:42.547 06:00:01 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:42.547 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:42.547 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:42.547 Found net devices under 0000:af:00.0: cvl_0_0 00:09:42.547 06:00:01 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:42.547 Found net devices under 0000:af:00.1: cvl_0_1 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:42.547 06:00:01 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:42.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:42.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:09:42.547 00:09:42.547 --- 10.0.0.2 ping statistics --- 00:09:42.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.547 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:42.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:42.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:09:42.547 00:09:42.547 --- 10.0.0.1 ping statistics --- 00:09:42.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.547 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:42.547 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:42.548 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:42.548 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:42.548 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:42.548 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:42.548 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:42.548 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:42.548 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:42.548 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:42.548 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:42.548 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:42.548 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=842542 00:09:42.548 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 842542 00:09:42.548 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:42.548 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 842542 ']' 00:09:42.548 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.548 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.548 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.548 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.548 06:00:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:42.548 [2024-12-15 06:00:01.960864] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:09:42.548 [2024-12-15 06:00:01.960917] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.548 [2024-12-15 06:00:02.040718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.548 [2024-12-15 06:00:02.062163] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.548 [2024-12-15 06:00:02.062212] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:42.548 [2024-12-15 06:00:02.062220] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.548 [2024-12-15 06:00:02.062226] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.548 [2024-12-15 06:00:02.062231] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:42.548 [2024-12-15 06:00:02.062704] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:42.548 [2024-12-15 06:00:02.200919] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:42.548 [2024-12-15 06:00:02.225152] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:42.548 malloc0 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:42.548 { 00:09:42.548 "params": { 00:09:42.548 "name": "Nvme$subsystem", 00:09:42.548 "trtype": "$TEST_TRANSPORT", 00:09:42.548 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:42.548 "adrfam": "ipv4", 00:09:42.548 "trsvcid": "$NVMF_PORT", 00:09:42.548 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:42.548 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:42.548 "hdgst": ${hdgst:-false}, 00:09:42.548 "ddgst": ${ddgst:-false} 00:09:42.548 }, 00:09:42.548 "method": "bdev_nvme_attach_controller" 00:09:42.548 } 00:09:42.548 EOF 00:09:42.548 )") 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:42.548 06:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:42.548 "params": { 00:09:42.548 "name": "Nvme1", 00:09:42.548 "trtype": "tcp", 00:09:42.548 "traddr": "10.0.0.2", 00:09:42.548 "adrfam": "ipv4", 00:09:42.548 "trsvcid": "4420", 00:09:42.548 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:42.548 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:42.548 "hdgst": false, 00:09:42.548 "ddgst": false 00:09:42.548 }, 00:09:42.548 "method": "bdev_nvme_attach_controller" 00:09:42.548 }' 00:09:42.548 [2024-12-15 06:00:02.308507] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:09:42.548 [2024-12-15 06:00:02.308548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid842670 ] 00:09:42.548 [2024-12-15 06:00:02.383366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.548 [2024-12-15 06:00:02.405681] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.807 Running I/O for 10 seconds... 
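The trace above shows `gen_nvmf_target_json` (nvmf/common.sh) assembling the `--json` config that bdevperf consumes: each subsystem contributes one `bdev_nvme_attach_controller` stanza built with a heredoc, and the stanzas are joined with `IFS=,`. A minimal standalone sketch of that pattern, assuming the `TEST_TRANSPORT` / `NVMF_FIRST_TARGET_IP` / `NVMF_PORT` defaults seen in the expanded output:

```shell
#!/usr/bin/env bash
# Sketch (assumption) of the gen_nvmf_target_json heredoc pattern traced above.
# Environment defaults mirror the values visible in the expanded log output.
TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}
NVMF_FIRST_TARGET_IP=${NVMF_FIRST_TARGET_IP:-10.0.0.2}
NVMF_PORT=${NVMF_PORT:-4420}

config=()
for subsystem in 1; do
    # Each iteration appends one bdev_nvme_attach_controller JSON stanza,
    # with unset hdgst/ddgst defaulting to false as in the trace.
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Joining with IFS=, produces the comma-separated config printed in the log.
IFS=,
printf '%s\n' "${config[*]}"
```

With one subsystem the join is a no-op, but the same array-plus-`IFS` idiom lets the helper emit multiple attach stanzas for multi-subsystem runs.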
00:09:44.682 8764.00 IOPS, 68.47 MiB/s [2024-12-15T05:00:05.759Z] 8815.00 IOPS, 68.87 MiB/s [2024-12-15T05:00:07.138Z] 8850.33 IOPS, 69.14 MiB/s [2024-12-15T05:00:08.075Z] 8795.75 IOPS, 68.72 MiB/s [2024-12-15T05:00:09.013Z] 8820.80 IOPS, 68.91 MiB/s [2024-12-15T05:00:09.950Z] 8847.17 IOPS, 69.12 MiB/s [2024-12-15T05:00:10.888Z] 8858.43 IOPS, 69.21 MiB/s [2024-12-15T05:00:11.825Z] 8836.00 IOPS, 69.03 MiB/s [2024-12-15T05:00:12.896Z] 8839.67 IOPS, 69.06 MiB/s [2024-12-15T05:00:12.896Z] 8850.40 IOPS, 69.14 MiB/s 00:09:52.756 Latency(us) 00:09:52.756 [2024-12-15T05:00:12.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.756 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:52.756 Verification LBA range: start 0x0 length 0x1000 00:09:52.756 Nvme1n1 : 10.01 8853.90 69.17 0.00 0.00 14415.28 341.33 21845.33 00:09:52.756 [2024-12-15T05:00:12.896Z] =================================================================================================================== 00:09:52.756 [2024-12-15T05:00:12.896Z] Total : 8853.90 69.17 0.00 0.00 14415.28 341.33 21845.33 00:09:53.015 06:00:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=844749 00:09:53.015 06:00:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:53.015 06:00:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.015 06:00:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:53.015 06:00:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:53.015 06:00:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:53.015 06:00:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:53.015 06:00:12 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:53.015 06:00:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:53.015 { 00:09:53.015 "params": { 00:09:53.015 "name": "Nvme$subsystem", 00:09:53.015 "trtype": "$TEST_TRANSPORT", 00:09:53.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.015 "adrfam": "ipv4", 00:09:53.015 "trsvcid": "$NVMF_PORT", 00:09:53.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.015 "hdgst": ${hdgst:-false}, 00:09:53.015 "ddgst": ${ddgst:-false} 00:09:53.015 }, 00:09:53.015 "method": "bdev_nvme_attach_controller" 00:09:53.015 } 00:09:53.015 EOF 00:09:53.015 )") 00:09:53.015 [2024-12-15 06:00:12.918898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.015 06:00:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:53.015 [2024-12-15 06:00:12.918932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.015 06:00:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:53.015 06:00:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:53.015 06:00:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:53.015 "params": { 00:09:53.015 "name": "Nvme1", 00:09:53.015 "trtype": "tcp", 00:09:53.015 "traddr": "10.0.0.2", 00:09:53.015 "adrfam": "ipv4", 00:09:53.015 "trsvcid": "4420", 00:09:53.015 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.015 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.015 "hdgst": false, 00:09:53.015 "ddgst": false 00:09:53.015 }, 00:09:53.015 "method": "bdev_nvme_attach_controller" 00:09:53.015 }' 00:09:53.015 [2024-12-15 06:00:12.930888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.015 [2024-12-15 06:00:12.930901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.015 [2024-12-15 06:00:12.942916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.015 [2024-12-15 06:00:12.942927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.015 [2024-12-15 06:00:12.954950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.015 [2024-12-15 06:00:12.954961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.015 [2024-12-15 06:00:12.958300] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:09:53.015 [2024-12-15 06:00:12.958345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid844749 ] 00:09:53.015 [2024-12-15 06:00:12.966980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.015 [2024-12-15 06:00:12.967006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.015 [2024-12-15 06:00:12.979017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.015 [2024-12-15 06:00:12.979028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.015 [2024-12-15 06:00:12.991056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.015 [2024-12-15 06:00:12.991073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.015 [2024-12-15 06:00:13.003077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.015 [2024-12-15 06:00:13.003087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.015 [2024-12-15 06:00:13.015108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.015 [2024-12-15 06:00:13.015118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.015 [2024-12-15 06:00:13.027140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.015 [2024-12-15 06:00:13.027154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.015 [2024-12-15 06:00:13.033280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.015 [2024-12-15 06:00:13.039170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:53.015 [2024-12-15 06:00:13.039183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.015 [2024-12-15 06:00:13.051205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.015 [2024-12-15 06:00:13.051219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.015 [2024-12-15 06:00:13.054259] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.015 [2024-12-15 06:00:13.063240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.015 [2024-12-15 06:00:13.063252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.015 [2024-12-15 06:00:13.075280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.015 [2024-12-15 06:00:13.075303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.015 [2024-12-15 06:00:13.087309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.015 [2024-12-15 06:00:13.087326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.015 [2024-12-15 06:00:13.099339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.015 [2024-12-15 06:00:13.099353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.015 [2024-12-15 06:00:13.111374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.015 [2024-12-15 06:00:13.111388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.015 [2024-12-15 06:00:13.123402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.015 [2024-12-15 06:00:13.123416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.015 [2024-12-15 06:00:13.135452] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.015 [2024-12-15 06:00:13.135474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.015 [2024-12-15 06:00:13.147471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.016 [2024-12-15 06:00:13.147486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.275 [2024-12-15 06:00:13.159502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.275 [2024-12-15 06:00:13.159517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.275 [2024-12-15 06:00:13.171538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.275 [2024-12-15 06:00:13.171554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.275 [2024-12-15 06:00:13.183563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.275 [2024-12-15 06:00:13.183574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.275 [2024-12-15 06:00:13.195594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.275 [2024-12-15 06:00:13.195603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.275 [2024-12-15 06:00:13.207628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.275 [2024-12-15 06:00:13.207639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.275 [2024-12-15 06:00:13.219661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.275 [2024-12-15 06:00:13.219674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.275 [2024-12-15 06:00:13.231695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:53.275 [2024-12-15 06:00:13.231706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.275 [2024-12-15 06:00:13.243726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.275 [2024-12-15 06:00:13.243739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.275 [2024-12-15 06:00:13.255759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.275 [2024-12-15 06:00:13.255769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.275 [2024-12-15 06:00:13.267797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.275 [2024-12-15 06:00:13.267810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.275 [2024-12-15 06:00:13.279827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.275 [2024-12-15 06:00:13.279836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.275 [2024-12-15 06:00:13.291859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.275 [2024-12-15 06:00:13.291869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.275 [2024-12-15 06:00:13.303894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.275 [2024-12-15 06:00:13.303905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.275 [2024-12-15 06:00:13.354275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.275 [2024-12-15 06:00:13.354293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.275 [2024-12-15 06:00:13.364058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.275 
[2024-12-15 06:00:13.364071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.275 Running I/O for 5 seconds... 00:09:53.275 [2024-12-15 06:00:13.380705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.275 [2024-12-15 06:00:13.380725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.275 [2024-12-15 06:00:13.391924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.275 [2024-12-15 06:00:13.391944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.275 [2024-12-15 06:00:13.401244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.275 [2024-12-15 06:00:13.401263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.534 [2024-12-15 06:00:13.415642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.534 [2024-12-15 06:00:13.415663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.534 [2024-12-15 06:00:13.429173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.534 [2024-12-15 06:00:13.429192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.534 [2024-12-15 06:00:13.443049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.534 [2024-12-15 06:00:13.443069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.534 [2024-12-15 06:00:13.457057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.534 [2024-12-15 06:00:13.457075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.534 [2024-12-15 06:00:13.470518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.534 [2024-12-15 
06:00:13.470537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.534 [2024-12-15 06:00:13.484289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.534 [2024-12-15 06:00:13.484308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.534 [2024-12-15 06:00:13.497736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.534 [2024-12-15 06:00:13.497755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.534 [2024-12-15 06:00:13.511026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.534 [2024-12-15 06:00:13.511045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.534 [2024-12-15 06:00:13.525005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.534 [2024-12-15 06:00:13.525028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.534 [2024-12-15 06:00:13.539100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.534 [2024-12-15 06:00:13.539119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.534 [2024-12-15 06:00:13.552335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.534 [2024-12-15 06:00:13.552354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.534 [2024-12-15 06:00:13.566015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.534 [2024-12-15 06:00:13.566049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.534 [2024-12-15 06:00:13.579711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.534 [2024-12-15 06:00:13.579730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:09:53.534 [2024-12-15 06:00:13.593364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.534 [2024-12-15 06:00:13.593387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.534 [2024-12-15 06:00:13.606675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.534 [2024-12-15 06:00:13.606695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.534 [2024-12-15 06:00:13.615651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.534 [2024-12-15 06:00:13.615669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.534 [2024-12-15 06:00:13.625357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.534 [2024-12-15 06:00:13.625375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.534 [2024-12-15 06:00:13.639130] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.534 [2024-12-15 06:00:13.639149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.534 [2024-12-15 06:00:13.648235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.534 [2024-12-15 06:00:13.648254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.534 [2024-12-15 06:00:13.661848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.534 [2024-12-15 06:00:13.661867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.793 [2024-12-15 06:00:13.676451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.793 [2024-12-15 06:00:13.676471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.793 
[2024-12-15 06:00:13.685448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.793 [2024-12-15 06:00:13.685467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.793 [2024-12-15 06:00:13.699509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.793 [2024-12-15 06:00:13.699529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.793 [2024-12-15 06:00:13.713321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.793 [2024-12-15 06:00:13.713341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.793 [2024-12-15 06:00:13.722066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.793 [2024-12-15 06:00:13.722084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.793 [2024-12-15 06:00:13.736260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.793 [2024-12-15 06:00:13.736285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.793 [2024-12-15 06:00:13.749683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.793 [2024-12-15 06:00:13.749701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.793 [2024-12-15 06:00:13.763663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.793 [2024-12-15 06:00:13.763682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.793 [2024-12-15 06:00:13.777403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.793 [2024-12-15 06:00:13.777422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.793 [2024-12-15 06:00:13.790976] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.793 [2024-12-15 06:00:13.791016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.793 [2024-12-15 06:00:13.805104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.793 [2024-12-15 06:00:13.805122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.793 [2024-12-15 06:00:13.818963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.793 [2024-12-15 06:00:13.818982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.793 [2024-12-15 06:00:13.832172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.793 [2024-12-15 06:00:13.832190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.793 [2024-12-15 06:00:13.841450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.793 [2024-12-15 06:00:13.841468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.793 [2024-12-15 06:00:13.855021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.793 [2024-12-15 06:00:13.855041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.793 [2024-12-15 06:00:13.868188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.793 [2024-12-15 06:00:13.868207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.793 [2024-12-15 06:00:13.881490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.793 [2024-12-15 06:00:13.881509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.793 [2024-12-15 06:00:13.894664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:53.793 [2024-12-15 06:00:13.894683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.793 [2024-12-15 06:00:13.903871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.793 [2024-12-15 06:00:13.903890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.793 [2024-12-15 06:00:13.918094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.793 [2024-12-15 06:00:13.918112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.793 [2024-12-15 06:00:13.931389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.793 [2024-12-15 06:00:13.931410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.052 [2024-12-15 06:00:13.940883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.052 [2024-12-15 06:00:13.940902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.052 [2024-12-15 06:00:13.955217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.052 [2024-12-15 06:00:13.955235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.053 [2024-12-15 06:00:13.968771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.053 [2024-12-15 06:00:13.968790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.053 [2024-12-15 06:00:13.982259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.053 [2024-12-15 06:00:13.982278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.053 [2024-12-15 06:00:13.995439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.053 
[2024-12-15 06:00:13.995457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.053 [2024-12-15 06:00:14.004448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.053 [2024-12-15 06:00:14.004467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.053 [2024-12-15 06:00:14.018567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.053 [2024-12-15 06:00:14.018585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.053 [2024-12-15 06:00:14.027335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.053 [2024-12-15 06:00:14.027354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.053 [2024-12-15 06:00:14.036737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.053 [2024-12-15 06:00:14.036755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.053 [2024-12-15 06:00:14.045746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.053 [2024-12-15 06:00:14.045764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.053 [2024-12-15 06:00:14.059924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.053 [2024-12-15 06:00:14.059943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.053 [2024-12-15 06:00:14.073182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.053 [2024-12-15 06:00:14.073201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.053 [2024-12-15 06:00:14.086877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.053 [2024-12-15 06:00:14.086895] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.053 [2024-12-15 06:00:14.100759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.053 [2024-12-15 06:00:14.100779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.053 [2024-12-15 06:00:14.111450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.053 [2024-12-15 06:00:14.111469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.053 [2024-12-15 06:00:14.125247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.053 [2024-12-15 06:00:14.125266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.053 [2024-12-15 06:00:14.138813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.053 [2024-12-15 06:00:14.138835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.053 [2024-12-15 06:00:14.152642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.053 [2024-12-15 06:00:14.152661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.053 [2024-12-15 06:00:14.166458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.053 [2024-12-15 06:00:14.166477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.053 [2024-12-15 06:00:14.179824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.053 [2024-12-15 06:00:14.179843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.312 [2024-12-15 06:00:14.193948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:54.312 [2024-12-15 06:00:14.193970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:54.312 [2024-12-15 06:00:14.207828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:54.312 [2024-12-15 06:00:14.207848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[output trimmed: the same error pair from subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext ("Requested NSID 1 already in use") and nvmf_rpc.c:1520:nvmf_rpc_ns_paused ("Unable to add namespace") repeats roughly every 10-15 ms from 06:00:14.207 through 06:00:16.292; periodic throughput samples interleaved below]
00:09:54.313 17063.00 IOPS, 133.30 MiB/s [2024-12-15T05:00:14.453Z]
00:09:55.349 17202.50 IOPS, 134.39 MiB/s [2024-12-15T05:00:15.489Z]
00:09:56.385 [2024-12-15 06:00:16.292707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.385 
[2024-12-15 06:00:16.292727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.385 [2024-12-15 06:00:16.306158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.385 [2024-12-15 06:00:16.306177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.385 [2024-12-15 06:00:16.319934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.385 [2024-12-15 06:00:16.319952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.385 [2024-12-15 06:00:16.333555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.385 [2024-12-15 06:00:16.333573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.385 [2024-12-15 06:00:16.347491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.385 [2024-12-15 06:00:16.347509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.385 [2024-12-15 06:00:16.361108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.385 [2024-12-15 06:00:16.361127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.385 [2024-12-15 06:00:16.370291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.385 [2024-12-15 06:00:16.370309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.385 17219.33 IOPS, 134.53 MiB/s [2024-12-15T05:00:16.525Z] [2024-12-15 06:00:16.384368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.385 [2024-12-15 06:00:16.384387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.385 [2024-12-15 06:00:16.397571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.385 
[2024-12-15 06:00:16.397589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.385 [2024-12-15 06:00:16.411178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.385 [2024-12-15 06:00:16.411202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.385 [2024-12-15 06:00:16.424702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.385 [2024-12-15 06:00:16.424721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.385 [2024-12-15 06:00:16.438220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.385 [2024-12-15 06:00:16.438240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.385 [2024-12-15 06:00:16.451547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.385 [2024-12-15 06:00:16.451566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.385 [2024-12-15 06:00:16.464583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.385 [2024-12-15 06:00:16.464602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.386 [2024-12-15 06:00:16.473545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.386 [2024-12-15 06:00:16.473563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.386 [2024-12-15 06:00:16.487354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.386 [2024-12-15 06:00:16.487373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.386 [2024-12-15 06:00:16.500617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.386 [2024-12-15 06:00:16.500636] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.386 [2024-12-15 06:00:16.514796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.386 [2024-12-15 06:00:16.514815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.645 [2024-12-15 06:00:16.529058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.645 [2024-12-15 06:00:16.529076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.645 [2024-12-15 06:00:16.542104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.645 [2024-12-15 06:00:16.542123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.645 [2024-12-15 06:00:16.555563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.645 [2024-12-15 06:00:16.555581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.645 [2024-12-15 06:00:16.565214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.645 [2024-12-15 06:00:16.565234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.645 [2024-12-15 06:00:16.578956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.645 [2024-12-15 06:00:16.578975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.645 [2024-12-15 06:00:16.592411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.645 [2024-12-15 06:00:16.592429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.645 [2024-12-15 06:00:16.606183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.645 [2024-12-15 06:00:16.606202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:56.645 [2024-12-15 06:00:16.619971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.645 [2024-12-15 06:00:16.619998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.645 [2024-12-15 06:00:16.633842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.645 [2024-12-15 06:00:16.633860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.645 [2024-12-15 06:00:16.647401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.645 [2024-12-15 06:00:16.647419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.645 [2024-12-15 06:00:16.661013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.645 [2024-12-15 06:00:16.661036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.645 [2024-12-15 06:00:16.674563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.645 [2024-12-15 06:00:16.674581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.645 [2024-12-15 06:00:16.688196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.645 [2024-12-15 06:00:16.688215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.645 [2024-12-15 06:00:16.701571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.645 [2024-12-15 06:00:16.701590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.645 [2024-12-15 06:00:16.715557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.645 [2024-12-15 06:00:16.715576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.645 [2024-12-15 06:00:16.728755] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.645 [2024-12-15 06:00:16.728774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.645 [2024-12-15 06:00:16.742394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.645 [2024-12-15 06:00:16.742412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.645 [2024-12-15 06:00:16.755970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.645 [2024-12-15 06:00:16.755988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.645 [2024-12-15 06:00:16.769746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.645 [2024-12-15 06:00:16.769766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.904 [2024-12-15 06:00:16.783489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.904 [2024-12-15 06:00:16.783510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.904 [2024-12-15 06:00:16.796724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.904 [2024-12-15 06:00:16.796743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.904 [2024-12-15 06:00:16.805827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.904 [2024-12-15 06:00:16.805845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.904 [2024-12-15 06:00:16.820393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.904 [2024-12-15 06:00:16.820412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.904 [2024-12-15 06:00:16.833725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:56.904 [2024-12-15 06:00:16.833745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.904 [2024-12-15 06:00:16.847646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.904 [2024-12-15 06:00:16.847666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.904 [2024-12-15 06:00:16.861249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.904 [2024-12-15 06:00:16.861269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.904 [2024-12-15 06:00:16.870310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.904 [2024-12-15 06:00:16.870330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.904 [2024-12-15 06:00:16.884797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.904 [2024-12-15 06:00:16.884816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.904 [2024-12-15 06:00:16.898216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.904 [2024-12-15 06:00:16.898234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.904 [2024-12-15 06:00:16.911829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.904 [2024-12-15 06:00:16.911848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.904 [2024-12-15 06:00:16.925411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.904 [2024-12-15 06:00:16.925430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.904 [2024-12-15 06:00:16.939113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.904 
[2024-12-15 06:00:16.939134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.904 [2024-12-15 06:00:16.952540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.904 [2024-12-15 06:00:16.952560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.904 [2024-12-15 06:00:16.966414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.904 [2024-12-15 06:00:16.966433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.904 [2024-12-15 06:00:16.979906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.904 [2024-12-15 06:00:16.979926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.904 [2024-12-15 06:00:16.993633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.904 [2024-12-15 06:00:16.993653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.904 [2024-12-15 06:00:17.007283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.904 [2024-12-15 06:00:17.007302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.905 [2024-12-15 06:00:17.016216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.905 [2024-12-15 06:00:17.016235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.905 [2024-12-15 06:00:17.030451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.905 [2024-12-15 06:00:17.030471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.164 [2024-12-15 06:00:17.043576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.164 [2024-12-15 06:00:17.043597] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.164 [2024-12-15 06:00:17.057139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.164 [2024-12-15 06:00:17.057159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.164 [2024-12-15 06:00:17.066393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.164 [2024-12-15 06:00:17.066412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.164 [2024-12-15 06:00:17.080834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.164 [2024-12-15 06:00:17.080855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.164 [2024-12-15 06:00:17.094786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.164 [2024-12-15 06:00:17.094805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.164 [2024-12-15 06:00:17.108508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.164 [2024-12-15 06:00:17.108528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.164 [2024-12-15 06:00:17.122207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.164 [2024-12-15 06:00:17.122226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.164 [2024-12-15 06:00:17.135375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.164 [2024-12-15 06:00:17.135393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.164 [2024-12-15 06:00:17.148850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.164 [2024-12-15 06:00:17.148869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:57.164 [2024-12-15 06:00:17.162307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.164 [2024-12-15 06:00:17.162326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.164 [2024-12-15 06:00:17.175367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.164 [2024-12-15 06:00:17.175386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.164 [2024-12-15 06:00:17.188828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.164 [2024-12-15 06:00:17.188848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.164 [2024-12-15 06:00:17.202767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.164 [2024-12-15 06:00:17.202788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.164 [2024-12-15 06:00:17.216133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.164 [2024-12-15 06:00:17.216153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.164 [2024-12-15 06:00:17.225540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.164 [2024-12-15 06:00:17.225559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.164 [2024-12-15 06:00:17.239781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.164 [2024-12-15 06:00:17.239800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.164 [2024-12-15 06:00:17.253097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.164 [2024-12-15 06:00:17.253115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.164 [2024-12-15 06:00:17.267114] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.164 [2024-12-15 06:00:17.267133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.164 [2024-12-15 06:00:17.280721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.164 [2024-12-15 06:00:17.280740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.164 [2024-12-15 06:00:17.294022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.164 [2024-12-15 06:00:17.294040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.423 [2024-12-15 06:00:17.307376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.423 [2024-12-15 06:00:17.307396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.423 [2024-12-15 06:00:17.321297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.423 [2024-12-15 06:00:17.321315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.423 [2024-12-15 06:00:17.334472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.423 [2024-12-15 06:00:17.334490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.423 [2024-12-15 06:00:17.348034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.423 [2024-12-15 06:00:17.348052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.423 [2024-12-15 06:00:17.361728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.423 [2024-12-15 06:00:17.361747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.423 [2024-12-15 06:00:17.375503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:57.423 [2024-12-15 06:00:17.375522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.423 17234.75 IOPS, 134.65 MiB/s [2024-12-15T05:00:17.563Z] [2024-12-15 06:00:17.384277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.423 [2024-12-15 06:00:17.384296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.423 [2024-12-15 06:00:17.393538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.423 [2024-12-15 06:00:17.393562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.423 [2024-12-15 06:00:17.407715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.423 [2024-12-15 06:00:17.407734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.423 [2024-12-15 06:00:17.421360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.423 [2024-12-15 06:00:17.421379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.423 [2024-12-15 06:00:17.434977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.423 [2024-12-15 06:00:17.435001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.423 [2024-12-15 06:00:17.448601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.423 [2024-12-15 06:00:17.448620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.423 [2024-12-15 06:00:17.462214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.423 [2024-12-15 06:00:17.462233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.423 [2024-12-15 06:00:17.475764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:57.423 [2024-12-15 06:00:17.475783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.423 [2024-12-15 06:00:17.489526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.423 [2024-12-15 06:00:17.489546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.423 [2024-12-15 06:00:17.503306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.424 [2024-12-15 06:00:17.503325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.424 [2024-12-15 06:00:17.516854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.424 [2024-12-15 06:00:17.516873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.424 [2024-12-15 06:00:17.530349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.424 [2024-12-15 06:00:17.530368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.424 [2024-12-15 06:00:17.543921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.424 [2024-12-15 06:00:17.543940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.424 [2024-12-15 06:00:17.557845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.424 [2024-12-15 06:00:17.557863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.682 [2024-12-15 06:00:17.571460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.682 [2024-12-15 06:00:17.571480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.683 [2024-12-15 06:00:17.585056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.683 
[2024-12-15 06:00:17.585075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.683 [2024-12-15 06:00:17.598657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.683 [2024-12-15 06:00:17.598677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.683 [2024-12-15 06:00:17.611923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.683 [2024-12-15 06:00:17.611941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.683 [2024-12-15 06:00:17.625552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.683 [2024-12-15 06:00:17.625571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.683 [2024-12-15 06:00:17.639072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.683 [2024-12-15 06:00:17.639092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.683 [2024-12-15 06:00:17.652304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.683 [2024-12-15 06:00:17.652331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.683 [2024-12-15 06:00:17.665739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.683 [2024-12-15 06:00:17.665757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.683 [2024-12-15 06:00:17.679631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.683 [2024-12-15 06:00:17.679649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.683 [2024-12-15 06:00:17.689609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.683 [2024-12-15 06:00:17.689628] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.683 [2024-12-15 06:00:17.703299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.683 [2024-12-15 06:00:17.703318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.683 [2024-12-15 06:00:17.716830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.683 [2024-12-15 06:00:17.716848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.683 [2024-12-15 06:00:17.730472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.683 [2024-12-15 06:00:17.730491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.683 [2024-12-15 06:00:17.743985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.683 [2024-12-15 06:00:17.744009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.683 [2024-12-15 06:00:17.752827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.683 [2024-12-15 06:00:17.752846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.683 [2024-12-15 06:00:17.766712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.683 [2024-12-15 06:00:17.766730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.683 [2024-12-15 06:00:17.780348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.683 [2024-12-15 06:00:17.780366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.683 [2024-12-15 06:00:17.794163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.683 [2024-12-15 06:00:17.794182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:57.683 [2024-12-15 06:00:17.802983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.683 [2024-12-15 06:00:17.803011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.683 [2024-12-15 06:00:17.812477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.683 [2024-12-15 06:00:17.812496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.941 [2024-12-15 06:00:17.826636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.941 [2024-12-15 06:00:17.826655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.941 [2024-12-15 06:00:17.835260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.941 [2024-12-15 06:00:17.835279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.941 [2024-12-15 06:00:17.844604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.941 [2024-12-15 06:00:17.844622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.941 [2024-12-15 06:00:17.858599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.941 [2024-12-15 06:00:17.858617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.941 [2024-12-15 06:00:17.867816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.941 [2024-12-15 06:00:17.867834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.941 [2024-12-15 06:00:17.881340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.941 [2024-12-15 06:00:17.881363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.941 [2024-12-15 06:00:17.895127] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.941 [2024-12-15 06:00:17.895146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.941 [2024-12-15 06:00:17.908355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.941 [2024-12-15 06:00:17.908374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.941 [2024-12-15 06:00:17.917606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.941 [2024-12-15 06:00:17.917624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.942 [2024-12-15 06:00:17.926767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.942 [2024-12-15 06:00:17.926785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.942 [2024-12-15 06:00:17.940722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.942 [2024-12-15 06:00:17.940739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.942 [2024-12-15 06:00:17.954259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.942 [2024-12-15 06:00:17.954277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.942 [2024-12-15 06:00:17.967723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.942 [2024-12-15 06:00:17.967743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.942 [2024-12-15 06:00:17.981653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.942 [2024-12-15 06:00:17.981674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.942 [2024-12-15 06:00:17.990270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:57.942 [2024-12-15 06:00:17.990291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.942 [2024-12-15 06:00:18.004084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.942 [2024-12-15 06:00:18.004102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.942 [2024-12-15 06:00:18.017758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.942 [2024-12-15 06:00:18.017775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.942 [2024-12-15 06:00:18.026625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.942 [2024-12-15 06:00:18.026644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.942 [2024-12-15 06:00:18.040652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.942 [2024-12-15 06:00:18.040671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.942 [2024-12-15 06:00:18.054605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.942 [2024-12-15 06:00:18.054623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.942 [2024-12-15 06:00:18.068199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.942 [2024-12-15 06:00:18.068218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.942 [2024-12-15 06:00:18.077195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.942 [2024-12-15 06:00:18.077214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.201 [2024-12-15 06:00:18.091540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.201 
[2024-12-15 06:00:18.091560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.201 [2024-12-15 06:00:18.104398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.201 [2024-12-15 06:00:18.104416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.201 [2024-12-15 06:00:18.118329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.201 [2024-12-15 06:00:18.118352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.201 [2024-12-15 06:00:18.131950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.201 [2024-12-15 06:00:18.131971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.201 [2024-12-15 06:00:18.145931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.201 [2024-12-15 06:00:18.145950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.201 [2024-12-15 06:00:18.159081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.201 [2024-12-15 06:00:18.159099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.201 [2024-12-15 06:00:18.172399] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.201 [2024-12-15 06:00:18.172419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.201 [2024-12-15 06:00:18.186025] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.201 [2024-12-15 06:00:18.186043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.201 [2024-12-15 06:00:18.199565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.201 [2024-12-15 06:00:18.199584] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.201 [2024-12-15 06:00:18.213505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.201 [2024-12-15 06:00:18.213524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.201 [2024-12-15 06:00:18.227120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.201 [2024-12-15 06:00:18.227141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.201 [2024-12-15 06:00:18.240529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.201 [2024-12-15 06:00:18.240547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.201 [2024-12-15 06:00:18.254367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.201 [2024-12-15 06:00:18.254387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.201 [2024-12-15 06:00:18.268058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.201 [2024-12-15 06:00:18.268078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.201 [2024-12-15 06:00:18.281467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.201 [2024-12-15 06:00:18.281487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.201 [2024-12-15 06:00:18.295375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.201 [2024-12-15 06:00:18.295395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.201 [2024-12-15 06:00:18.309000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.201 [2024-12-15 06:00:18.309020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:58.201 [2024-12-15 06:00:18.322491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.201 [2024-12-15 06:00:18.322509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.201 [2024-12-15 06:00:18.336197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.201 [2024-12-15 06:00:18.336216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.461 [2024-12-15 06:00:18.349908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.461 [2024-12-15 06:00:18.349928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.461 [2024-12-15 06:00:18.363379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.461 [2024-12-15 06:00:18.363399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.461 [2024-12-15 06:00:18.376702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.461 [2024-12-15 06:00:18.376721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.461 17234.80 IOPS, 134.65 MiB/s 00:09:58.461 Latency(us) 00:09:58.461 [2024-12-15T05:00:18.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:58.461 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:58.461 Nvme1n1 : 5.00 17244.13 134.72 0.00 0.00 7416.79 3464.05 18225.25 00:09:58.461 [2024-12-15T05:00:18.601Z] =================================================================================================================== 00:09:58.461 [2024-12-15T05:00:18.601Z] Total : 17244.13 134.72 0.00 0.00 7416.79 3464.05 18225.25 00:09:58.461 [2024-12-15 06:00:18.386629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.461 [2024-12-15 
06:00:18.386647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.461 [2024-12-15 06:00:18.398654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.461 [2024-12-15 06:00:18.398669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.461 [2024-12-15 06:00:18.410704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.461 [2024-12-15 06:00:18.410726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.461 [2024-12-15 06:00:18.422727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.461 [2024-12-15 06:00:18.422745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.461 [2024-12-15 06:00:18.434759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.461 [2024-12-15 06:00:18.434779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.461 [2024-12-15 06:00:18.446785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.461 [2024-12-15 06:00:18.446802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.461 [2024-12-15 06:00:18.458817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.461 [2024-12-15 06:00:18.458837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.461 [2024-12-15 06:00:18.470847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.461 [2024-12-15 06:00:18.470861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.461 [2024-12-15 06:00:18.482879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.461 [2024-12-15 06:00:18.482896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:09:58.461 [2024-12-15 06:00:18.494910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.461 [2024-12-15 06:00:18.494923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.461 [2024-12-15 06:00:18.506944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.461 [2024-12-15 06:00:18.506958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.461 [2024-12-15 06:00:18.518973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.461 [2024-12-15 06:00:18.518988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.461 [2024-12-15 06:00:18.531005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.461 [2024-12-15 06:00:18.531016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (844749) - No such process 00:09:58.461 06:00:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 844749 00:09:58.461 06:00:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.461 06:00:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.461 06:00:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:58.461 06:00:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.461 06:00:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:58.461 06:00:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:58.461 06:00:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:58.461 delay0 00:09:58.461 06:00:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.461 06:00:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:58.461 06:00:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.461 06:00:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:58.461 06:00:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.461 06:00:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:58.720 [2024-12-15 06:00:18.685926] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:05.289 Initializing NVMe Controllers 00:10:05.289 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:05.289 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:05.289 Initialization complete. Launching workers. 
00:10:05.289 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 71 00:10:05.289 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 358, failed to submit 33 00:10:05.289 success 173, unsuccessful 185, failed 0 00:10:05.289 06:00:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:05.289 06:00:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:05.289 06:00:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:05.289 06:00:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:05.289 06:00:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:05.289 06:00:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:05.289 06:00:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:05.289 06:00:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:05.289 rmmod nvme_tcp 00:10:05.289 rmmod nvme_fabrics 00:10:05.289 rmmod nvme_keyring 00:10:05.289 06:00:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:05.289 06:00:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:05.289 06:00:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:05.289 06:00:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 842542 ']' 00:10:05.289 06:00:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 842542 00:10:05.289 06:00:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 842542 ']' 00:10:05.289 06:00:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 842542 00:10:05.289 06:00:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:10:05.289 06:00:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:05.289 06:00:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 842542 00:10:05.289 06:00:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:05.289 06:00:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:05.289 06:00:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 842542' 00:10:05.289 killing process with pid 842542 00:10:05.289 06:00:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 842542 00:10:05.289 06:00:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 842542 00:10:05.289 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:05.289 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:05.289 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:05.289 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:05.289 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:05.289 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:05.289 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:05.289 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:05.289 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:05.289 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:10:05.289 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.289 06:00:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.196 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:07.196 00:10:07.196 real 0m31.332s 00:10:07.196 user 0m42.261s 00:10:07.196 sys 0m10.747s 00:10:07.196 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.196 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.196 ************************************ 00:10:07.196 END TEST nvmf_zcopy 00:10:07.196 ************************************ 00:10:07.196 06:00:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:07.196 06:00:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:07.196 06:00:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.196 06:00:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:07.196 ************************************ 00:10:07.196 START TEST nvmf_nmic 00:10:07.196 ************************************ 00:10:07.196 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:07.196 * Looking for test storage... 
00:10:07.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:07.196 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:07.196 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:10:07.196 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:07.196 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:07.196 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.456 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.456 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.456 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.456 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.456 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.456 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.456 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.456 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.456 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.456 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.456 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:07.456 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:07.456 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.456 06:00:27 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:07.456 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:07.456 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:07.456 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.456 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:07.456 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.456 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:07.456 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:07.456 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.456 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:07.456 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.456 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.456 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.456 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:07.456 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.456 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:07.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.456 --rc genhtml_branch_coverage=1 00:10:07.456 --rc genhtml_function_coverage=1 00:10:07.456 --rc genhtml_legend=1 00:10:07.456 --rc geninfo_all_blocks=1 00:10:07.456 --rc geninfo_unexecuted_blocks=1 
00:10:07.456 00:10:07.456 ' 00:10:07.456 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:07.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.456 --rc genhtml_branch_coverage=1 00:10:07.456 --rc genhtml_function_coverage=1 00:10:07.456 --rc genhtml_legend=1 00:10:07.456 --rc geninfo_all_blocks=1 00:10:07.456 --rc geninfo_unexecuted_blocks=1 00:10:07.456 00:10:07.456 ' 00:10:07.456 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:07.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.457 --rc genhtml_branch_coverage=1 00:10:07.457 --rc genhtml_function_coverage=1 00:10:07.457 --rc genhtml_legend=1 00:10:07.457 --rc geninfo_all_blocks=1 00:10:07.457 --rc geninfo_unexecuted_blocks=1 00:10:07.457 00:10:07.457 ' 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:07.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.457 --rc genhtml_branch_coverage=1 00:10:07.457 --rc genhtml_function_coverage=1 00:10:07.457 --rc genhtml_legend=1 00:10:07.457 --rc geninfo_all_blocks=1 00:10:07.457 --rc geninfo_unexecuted_blocks=1 00:10:07.457 00:10:07.457 ' 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.457 06:00:27 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:07.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:07.457 
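The trace above records a genuine shell error: at test/nvmf/common.sh line 33, `'[' '' -eq 1 ']'` fails with `integer expression expected` because a variable expanded to the empty string before a numeric test. A minimal sketch of the failure mode and the usual `${var:-0}` default guard; `FLAG` is an illustrative name, since the log does not show which variable was empty:

```shell
#!/usr/bin/env bash
# Reproduce the "integer expression expected" failure seen at
# nvmf/common.sh line 33, then show the default-expansion guard.
FLAG=''   # illustrative: some flag variable that was never set

# Unguarded: [ '' -eq 1 ] is an error for test(1), exit status 2.
if [ "$FLAG" -eq 1 ] 2>/dev/null; then
    echo "unguarded: matched"
else
    echo "unguarded: error or no match"
fi

# Guarded: expand an explicit default so the operand is always numeric.
if [ "${FLAG:-0}" -eq 1 ]; then
    echo "guarded: matched"
else
    echo "guarded: no match"
fi
```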
06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:07.457 06:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.030 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:14.030 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:14.030 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:14.030 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:14.030 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:14.030 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:14.030 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:14.030 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:14.030 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:14.030 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:14.030 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:14.030 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:14.030 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:14.030 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:14.030 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:14.030 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:14.030 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:14.030 06:00:33 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:14.030 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:14.030 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:14.030 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:14.030 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:14.030 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:14.030 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:14.030 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:14.030 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:14.030 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:14.031 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:14.031 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:14.031 Found net devices under 0000:af:00.0: cvl_0_0 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:14.031 Found net devices under 0000:af:00.1: cvl_0_1 00:10:14.031 
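Earlier in this device loop, xtrace renders the device-id checks as `[[ 0x159b == \0\x\1\0\1\7 ]]`: the right-hand side of `==` inside `[[ ]]` is a glob pattern unless quoted, so the script quotes it and bash's trace escapes every character to show it is matched literally. A small illustration of the literal-versus-pattern distinction:

```shell
#!/usr/bin/env bash
# Inside [[ ]], the right-hand side of == is a glob pattern unless quoted.
dev_id=0x159b

# Quoted: matched literally, like the \0\x\1\0\1\7 comparisons in the trace.
[[ $dev_id == "0x1017" ]] && echo "literal match" || echo "literal: no match"

# Unquoted: glob pattern; ?? matches any two characters.
[[ $dev_id == 0x15?? ]] && echo "pattern match" || echo "pattern: no match"
```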
06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:14.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:14.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:10:14.031 00:10:14.031 --- 10.0.0.2 ping statistics --- 00:10:14.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.031 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:14.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
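The sequence traced above builds the test topology: one NIC port (cvl_0_0) moves into the cvl_0_0_ns_spdk namespace as the target side, its peer (cvl_0_1) stays in the root namespace as the initiator, and an iptables rule opens TCP port 4420, so target and initiator traffic really crosses the link. A dry-run sketch of the same steps; the `run` wrapper only echoes each command, since executing them needs root and the actual cvl_0_* interfaces:

```shell
#!/usr/bin/env bash
# Dry-run of the namespace setup traced above: print each step instead
# of executing it (the real commands need root and the cvl_0_* NICs).
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
```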
00:10:14.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:10:14.031 00:10:14.031 --- 10.0.0.1 ping statistics --- 00:10:14.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.031 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=850132 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 850132 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 850132 ']' 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.031 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.031 [2024-12-15 06:00:33.532445] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:10:14.032 [2024-12-15 06:00:33.532497] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.032 [2024-12-15 06:00:33.613596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:14.032 [2024-12-15 06:00:33.637404] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:14.032 [2024-12-15 06:00:33.637441] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:14.032 [2024-12-15 06:00:33.637450] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:14.032 [2024-12-15 06:00:33.637456] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:14.032 [2024-12-15 06:00:33.637460] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:14.032 [2024-12-15 06:00:33.638923] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.032 [2024-12-15 06:00:33.638939] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.032 [2024-12-15 06:00:33.639045] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.032 [2024-12-15 06:00:33.639046] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.032 [2024-12-15 06:00:33.779597] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:14.032 
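The "TCP Transport Init" notice above comes from the first of several `rpc_cmd` provisioning calls traced in this section (transport, malloc bdev, subsystem, namespace, listener). Outside the test harness the same sequence would look roughly like this with SPDK's scripts/rpc.py; the `rpc` wrapper only prints each call, since the real ones need a live nvmf_tgt listening on /var/tmp/spdk.sock:

```shell
#!/usr/bin/env bash
# The provisioning sequence from nmic.sh as standalone rpc.py calls,
# echoed rather than executed (each needs a running nvmf_tgt).
rpc() { echo "+ scripts/rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```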
06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.032 Malloc0 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.032 [2024-12-15 06:00:33.848725] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:14.032 test case1: single bdev can't be used in multiple subsystems 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.032 [2024-12-15 06:00:33.876645] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:14.032 [2024-12-15 
06:00:33.876667] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:14.032 [2024-12-15 06:00:33.876675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.032 request: 00:10:14.032 { 00:10:14.032 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:14.032 "namespace": { 00:10:14.032 "bdev_name": "Malloc0", 00:10:14.032 "no_auto_visible": false, 00:10:14.032 "hide_metadata": false 00:10:14.032 }, 00:10:14.032 "method": "nvmf_subsystem_add_ns", 00:10:14.032 "req_id": 1 00:10:14.032 } 00:10:14.032 Got JSON-RPC error response 00:10:14.032 response: 00:10:14.032 { 00:10:14.032 "code": -32602, 00:10:14.032 "message": "Invalid parameters" 00:10:14.032 } 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:14.032 Adding namespace failed - expected result. 
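Test case1 above is a negative test: the second `nvmf_subsystem_add_ns` is expected to fail because Malloc0 is already claimed, so the script captures the status (`nmic_status=1`) and only treats success as an error. A minimal sketch of that pattern; `expected_to_fail` stands in for the duplicate add_ns RPC:

```shell
#!/usr/bin/env bash
# Negative-test pattern from nmic.sh: run a command expected to fail,
# capture its status, and abort only if it unexpectedly succeeded.
expected_to_fail() { return 1; }   # stand-in for the duplicate add_ns RPC

status=0
expected_to_fail || status=$?

if [ "$status" -eq 0 ]; then
    echo "ERROR: command unexpectedly succeeded" >&2
    exit 1
fi
echo "Adding namespace failed - expected result."
```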
00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:14.032 test case2: host connect to nvmf target in multiple paths 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.032 [2024-12-15 06:00:33.888784] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.032 06:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:14.970 06:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:16.347 06:00:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:16.347 06:00:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:16.347 06:00:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:16.347 06:00:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:16.347 06:00:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
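The `waitforserial` trace above polls `lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME` until the connected namespace shows up, sleeping 2 seconds between attempts with a 16-try cap. A sketch of that loop's shape; `list_serials` is a counter-driven stub standing in for lsblk, and the sleep is dropped so the example runs instantly:

```shell
#!/usr/bin/env bash
# Shape of the waitforserial helper traced above: poll until a device
# with the expected serial appears, capped at 16 attempts.
list_serials() {
    # Stub for 'lsblk -l -o NAME,SERIAL': pretend the device
    # appears on the third poll ($1 = poll number).
    (( $1 >= 3 )) && echo "nvme0n1 SPDKISFASTANDAWESOME" || true
}

waitforserial() {
    local serial=$1 i=0 found
    while (( i++ <= 15 )); do
        found=$(list_serials "$i" | grep -c -- "$serial")
        (( found == 1 )) && { echo "found on poll $i"; return 0; }
    done
    return 1
}

waitforserial SPDKISFASTANDAWESOME
# → found on poll 3
```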
00:10:18.254 06:00:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:18.254 06:00:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:18.254 06:00:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:18.254 06:00:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:18.254 06:00:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:18.254 06:00:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:18.254 06:00:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:18.254 [global] 00:10:18.254 thread=1 00:10:18.254 invalidate=1 00:10:18.254 rw=write 00:10:18.254 time_based=1 00:10:18.254 runtime=1 00:10:18.254 ioengine=libaio 00:10:18.254 direct=1 00:10:18.254 bs=4096 00:10:18.254 iodepth=1 00:10:18.254 norandommap=0 00:10:18.254 numjobs=1 00:10:18.254 00:10:18.254 verify_dump=1 00:10:18.254 verify_backlog=512 00:10:18.254 verify_state_save=0 00:10:18.254 do_verify=1 00:10:18.254 verify=crc32c-intel 00:10:18.254 [job0] 00:10:18.254 filename=/dev/nvme0n1 00:10:18.254 Could not set queue depth (nvme0n1) 00:10:18.513 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:18.513 fio-3.35 00:10:18.513 Starting 1 thread 00:10:19.891 00:10:19.891 job0: (groupid=0, jobs=1): err= 0: pid=851183: Sun Dec 15 06:00:39 2024 00:10:19.891 read: IOPS=22, BW=89.7KiB/s (91.8kB/s)(92.0KiB/1026msec) 00:10:19.891 slat (nsec): min=9598, max=25134, avg=22136.96, stdev=2839.09 00:10:19.891 clat (usec): min=40721, max=41088, avg=40955.52, stdev=106.54 00:10:19.891 lat (usec): min=40730, max=41109, 
avg=40977.65, stdev=107.97 00:10:19.891 clat percentiles (usec): 00:10:19.891 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:10:19.891 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:19.891 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:19.891 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:19.891 | 99.99th=[41157] 00:10:19.891 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:10:19.891 slat (nsec): min=10596, max=44563, avg=11742.17, stdev=2374.78 00:10:19.891 clat (usec): min=117, max=337, avg=147.48, stdev=23.06 00:10:19.891 lat (usec): min=129, max=375, avg=159.22, stdev=23.78 00:10:19.891 clat percentiles (usec): 00:10:19.891 | 1.00th=[ 122], 5.00th=[ 125], 10.00th=[ 127], 20.00th=[ 129], 00:10:19.891 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 153], 00:10:19.891 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 178], 95.00th=[ 182], 00:10:19.891 | 99.00th=[ 190], 99.50th=[ 192], 99.90th=[ 338], 99.95th=[ 338], 00:10:19.891 | 99.99th=[ 338] 00:10:19.891 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:19.891 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:19.891 lat (usec) : 250=95.51%, 500=0.19% 00:10:19.891 lat (msec) : 50=4.30% 00:10:19.891 cpu : usr=0.29%, sys=1.07%, ctx=535, majf=0, minf=1 00:10:19.891 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.891 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.891 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.891 00:10:19.891 Run status group 0 (all jobs): 00:10:19.891 READ: bw=89.7KiB/s (91.8kB/s), 89.7KiB/s-89.7KiB/s (91.8kB/s-91.8kB/s), io=92.0KiB (94.2kB), 
run=1026-1026msec 00:10:19.891 WRITE: bw=1996KiB/s (2044kB/s), 1996KiB/s-1996KiB/s (2044kB/s-2044kB/s), io=2048KiB (2097kB), run=1026-1026msec 00:10:19.891 00:10:19.891 Disk stats (read/write): 00:10:19.891 nvme0n1: ios=69/512, merge=0/0, ticks=807/68, in_queue=875, util=91.58% 00:10:19.891 06:00:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:19.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:19.891 06:00:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:19.891 06:00:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:19.891 06:00:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:19.891 06:00:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:19.891 06:00:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:19.891 06:00:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:19.891 06:00:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:19.891 06:00:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:19.891 06:00:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:19.891 06:00:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:19.891 06:00:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:19.891 06:00:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:19.891 06:00:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:19.891 06:00:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:10:19.891 06:00:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:19.891 rmmod nvme_tcp 00:10:19.891 rmmod nvme_fabrics 00:10:19.891 rmmod nvme_keyring 00:10:19.891 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:19.891 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:19.891 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:19.891 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 850132 ']' 00:10:19.891 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 850132 00:10:19.891 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 850132 ']' 00:10:19.891 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 850132 00:10:19.891 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:19.891 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.891 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 850132 00:10:20.151 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:20.151 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:20.151 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 850132' 00:10:20.151 killing process with pid 850132 00:10:20.151 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 850132 00:10:20.151 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 850132 00:10:20.151 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:20.151 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:20.151 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:20.151 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:20.151 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:20.151 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:20.151 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:20.151 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:20.151 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:20.151 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.151 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.151 06:00:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.690 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:22.690 00:10:22.690 real 0m15.143s 00:10:22.690 user 0m33.518s 00:10:22.690 sys 0m5.267s 00:10:22.690 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.690 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:22.690 ************************************ 00:10:22.690 END TEST nvmf_nmic 00:10:22.690 ************************************ 00:10:22.690 06:00:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 
00:10:22.690 06:00:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:22.690 06:00:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.690 06:00:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:22.690 ************************************ 00:10:22.690 START TEST nvmf_fio_target 00:10:22.690 ************************************ 00:10:22.690 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:22.690 * Looking for test storage... 00:10:22.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:22.690 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:22.690 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:22.690 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:22.690 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:22.690 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.690 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.690 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.690 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.690 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.690 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:22.690 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read 
-ra ver2 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.691 06:00:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:22.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.691 --rc genhtml_branch_coverage=1 00:10:22.691 --rc genhtml_function_coverage=1 00:10:22.691 --rc genhtml_legend=1 00:10:22.691 --rc geninfo_all_blocks=1 00:10:22.691 --rc geninfo_unexecuted_blocks=1 00:10:22.691 00:10:22.691 ' 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:22.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.691 --rc genhtml_branch_coverage=1 00:10:22.691 --rc genhtml_function_coverage=1 00:10:22.691 --rc genhtml_legend=1 00:10:22.691 --rc geninfo_all_blocks=1 00:10:22.691 --rc geninfo_unexecuted_blocks=1 00:10:22.691 00:10:22.691 ' 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:22.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.691 --rc genhtml_branch_coverage=1 00:10:22.691 --rc genhtml_function_coverage=1 00:10:22.691 --rc genhtml_legend=1 00:10:22.691 --rc geninfo_all_blocks=1 00:10:22.691 --rc geninfo_unexecuted_blocks=1 00:10:22.691 00:10:22.691 ' 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:22.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.691 --rc 
genhtml_branch_coverage=1 00:10:22.691 --rc genhtml_function_coverage=1 00:10:22.691 --rc genhtml_legend=1 00:10:22.691 --rc geninfo_all_blocks=1 00:10:22.691 --rc geninfo_unexecuted_blocks=1 00:10:22.691 00:10:22.691 ' 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.691 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:22.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:22.692 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:22.692 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:22.692 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:22.692 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:22.692 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:22.692 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:22.692 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:22.692 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:22.692 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:22.692 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:22.692 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:22.692 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:22.692 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.692 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.692 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.692 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:22.692 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:22.692 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:22.692 06:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.265 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:29.265 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:29.265 06:00:48 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:29.265 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:29.265 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:29.265 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:29.265 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:29.265 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:29.265 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:29.265 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:29.265 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:29.266 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:29.266 06:00:48 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:29.266 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:29.266 Found net devices under 0000:af:00.0: cvl_0_0 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:29.266 Found net devices under 0000:af:00.1: cvl_0_1 
00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:29.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:29.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:10:29.266 00:10:29.266 --- 10.0.0.2 ping statistics --- 00:10:29.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.266 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:29.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:29.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:10:29.266 00:10:29.266 --- 10.0.0.1 ping statistics --- 00:10:29.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.266 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:29.266 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.267 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=854881 00:10:29.267 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 854881 00:10:29.267 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:29.267 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 854881 ']' 00:10:29.267 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.267 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:29.267 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.267 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:29.267 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.267 [2024-12-15 06:00:48.582480] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:10:29.267 [2024-12-15 06:00:48.582527] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.267 [2024-12-15 06:00:48.663250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:29.267 [2024-12-15 06:00:48.686635] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:29.267 [2024-12-15 06:00:48.686674] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:29.267 [2024-12-15 06:00:48.686683] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:29.267 [2024-12-15 06:00:48.686690] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:29.267 [2024-12-15 06:00:48.686695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:29.267 [2024-12-15 06:00:48.688168] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.267 [2024-12-15 06:00:48.688276] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:29.267 [2024-12-15 06:00:48.688384] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.267 [2024-12-15 06:00:48.688386] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:29.267 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:29.267 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:29.267 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:29.267 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:29.267 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.267 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:29.267 06:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:29.267 [2024-12-15 06:00:49.005070] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:29.267 06:00:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:29.267 06:00:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:29.267 06:00:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:29.526 06:00:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:29.526 06:00:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:29.785 06:00:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:29.785 06:00:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:29.785 06:00:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:29.785 06:00:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:30.045 06:00:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:30.304 06:00:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:30.304 06:00:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:30.563 06:00:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:30.563 06:00:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:30.822 06:00:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:30.822 06:00:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:10:30.822 06:00:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:31.081 06:00:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:31.081 06:00:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:31.340 06:00:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:31.340 06:00:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:31.600 06:00:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:31.600 [2024-12-15 06:00:51.688817] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:31.600 06:00:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:31.859 06:00:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:32.118 06:00:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:10:33.498 06:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:33.498 06:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:33.498 06:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:33.498 06:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:33.498 06:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:33.498 06:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:35.404 06:00:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:35.404 06:00:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:35.404 06:00:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:35.404 06:00:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:35.404 06:00:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:35.404 06:00:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:35.404 06:00:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:35.404 [global] 00:10:35.404 thread=1 00:10:35.404 invalidate=1 00:10:35.404 rw=write 00:10:35.404 time_based=1 00:10:35.404 runtime=1 00:10:35.404 ioengine=libaio 00:10:35.404 direct=1 00:10:35.404 bs=4096 00:10:35.404 iodepth=1 00:10:35.404 norandommap=0 00:10:35.404 numjobs=1 00:10:35.404 00:10:35.404 
verify_dump=1 00:10:35.404 verify_backlog=512 00:10:35.404 verify_state_save=0 00:10:35.404 do_verify=1 00:10:35.404 verify=crc32c-intel 00:10:35.404 [job0] 00:10:35.404 filename=/dev/nvme0n1 00:10:35.404 [job1] 00:10:35.404 filename=/dev/nvme0n2 00:10:35.404 [job2] 00:10:35.404 filename=/dev/nvme0n3 00:10:35.404 [job3] 00:10:35.404 filename=/dev/nvme0n4 00:10:35.404 Could not set queue depth (nvme0n1) 00:10:35.404 Could not set queue depth (nvme0n2) 00:10:35.404 Could not set queue depth (nvme0n3) 00:10:35.404 Could not set queue depth (nvme0n4) 00:10:35.663 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.663 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.663 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.663 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.663 fio-3.35 00:10:35.663 Starting 4 threads 00:10:37.066 00:10:37.066 job0: (groupid=0, jobs=1): err= 0: pid=856200: Sun Dec 15 06:00:56 2024 00:10:37.066 read: IOPS=2385, BW=9542KiB/s (9771kB/s)(9552KiB/1001msec) 00:10:37.066 slat (nsec): min=6213, max=22533, avg=7003.97, stdev=636.68 00:10:37.066 clat (usec): min=172, max=313, avg=216.09, stdev=14.72 00:10:37.066 lat (usec): min=179, max=320, avg=223.09, stdev=14.76 00:10:37.066 clat percentiles (usec): 00:10:37.066 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 204], 00:10:37.066 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 221], 00:10:37.066 | 70.00th=[ 223], 80.00th=[ 229], 90.00th=[ 235], 95.00th=[ 239], 00:10:37.066 | 99.00th=[ 258], 99.50th=[ 265], 99.90th=[ 293], 99.95th=[ 306], 00:10:37.066 | 99.99th=[ 314] 00:10:37.066 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:37.066 slat (nsec): min=9146, max=60399, avg=10077.38, stdev=1533.33 
00:10:37.066 clat (usec): min=115, max=296, avg=168.40, stdev=35.19 00:10:37.066 lat (usec): min=125, max=356, avg=178.48, stdev=35.35 00:10:37.066 clat percentiles (usec): 00:10:37.066 | 1.00th=[ 126], 5.00th=[ 133], 10.00th=[ 139], 20.00th=[ 143], 00:10:37.066 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 163], 00:10:37.066 | 70.00th=[ 174], 80.00th=[ 184], 90.00th=[ 223], 95.00th=[ 260], 00:10:37.066 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 285], 99.95th=[ 293], 00:10:37.066 | 99.99th=[ 297] 00:10:37.066 bw ( KiB/s): min=12288, max=12288, per=50.40%, avg=12288.00, stdev= 0.00, samples=1 00:10:37.066 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:37.066 lat (usec) : 250=95.01%, 500=4.99% 00:10:37.066 cpu : usr=3.40%, sys=3.50%, ctx=4949, majf=0, minf=1 00:10:37.066 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.066 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.066 issued rwts: total=2388,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.066 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.066 job1: (groupid=0, jobs=1): err= 0: pid=856201: Sun Dec 15 06:00:56 2024 00:10:37.066 read: IOPS=22, BW=91.6KiB/s (93.8kB/s)(92.0KiB/1004msec) 00:10:37.066 slat (nsec): min=7883, max=21103, avg=9692.30, stdev=3241.08 00:10:37.066 clat (usec): min=4979, max=41990, avg=39565.57, stdev=7547.26 00:10:37.066 lat (usec): min=4991, max=42001, avg=39575.26, stdev=7546.77 00:10:37.066 clat percentiles (usec): 00:10:37.066 | 1.00th=[ 4948], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:37.066 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:37.066 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:10:37.066 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:37.066 | 99.99th=[42206] 
00:10:37.066 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:10:37.066 slat (nsec): min=9334, max=43187, avg=10551.35, stdev=1878.18 00:10:37.066 clat (usec): min=125, max=288, avg=170.81, stdev=19.87 00:10:37.066 lat (usec): min=136, max=331, avg=181.36, stdev=20.41 00:10:37.066 clat percentiles (usec): 00:10:37.066 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 149], 20.00th=[ 155], 00:10:37.066 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 169], 60.00th=[ 176], 00:10:37.066 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 202], 00:10:37.066 | 99.00th=[ 221], 99.50th=[ 249], 99.90th=[ 289], 99.95th=[ 289], 00:10:37.066 | 99.99th=[ 289] 00:10:37.066 bw ( KiB/s): min= 4096, max= 4096, per=16.80%, avg=4096.00, stdev= 0.00, samples=1 00:10:37.066 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:37.066 lat (usec) : 250=95.33%, 500=0.37% 00:10:37.066 lat (msec) : 10=0.19%, 50=4.11% 00:10:37.066 cpu : usr=0.30%, sys=0.50%, ctx=535, majf=0, minf=2 00:10:37.066 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.066 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.066 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.066 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.066 job2: (groupid=0, jobs=1): err= 0: pid=856202: Sun Dec 15 06:00:56 2024 00:10:37.066 read: IOPS=21, BW=87.3KiB/s (89.4kB/s)(88.0KiB/1008msec) 00:10:37.066 slat (nsec): min=10405, max=23749, avg=21899.50, stdev=2618.89 00:10:37.066 clat (usec): min=40893, max=41048, avg=40971.44, stdev=38.19 00:10:37.066 lat (usec): min=40915, max=41072, avg=40993.34, stdev=37.34 00:10:37.066 clat percentiles (usec): 00:10:37.066 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:37.066 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 
00:10:37.066 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:37.066 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:37.066 | 99.99th=[41157] 00:10:37.066 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:10:37.066 slat (nsec): min=10037, max=45515, avg=13272.62, stdev=2871.81 00:10:37.066 clat (usec): min=138, max=267, avg=191.18, stdev=33.65 00:10:37.066 lat (usec): min=148, max=306, avg=204.46, stdev=34.43 00:10:37.066 clat percentiles (usec): 00:10:37.066 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 163], 00:10:37.066 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 186], 00:10:37.066 | 70.00th=[ 223], 80.00th=[ 237], 90.00th=[ 241], 95.00th=[ 243], 00:10:37.066 | 99.00th=[ 251], 99.50th=[ 255], 99.90th=[ 269], 99.95th=[ 269], 00:10:37.066 | 99.99th=[ 269] 00:10:37.066 bw ( KiB/s): min= 4096, max= 4096, per=16.80%, avg=4096.00, stdev= 0.00, samples=1 00:10:37.066 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:37.066 lat (usec) : 250=94.76%, 500=1.12% 00:10:37.066 lat (msec) : 50=4.12% 00:10:37.066 cpu : usr=0.50%, sys=0.99%, ctx=534, majf=0, minf=1 00:10:37.066 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.066 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.066 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.066 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.066 job3: (groupid=0, jobs=1): err= 0: pid=856204: Sun Dec 15 06:00:56 2024 00:10:37.066 read: IOPS=2062, BW=8252KiB/s (8450kB/s)(8260KiB/1001msec) 00:10:37.066 slat (nsec): min=7233, max=25490, avg=8378.55, stdev=1381.30 00:10:37.066 clat (usec): min=184, max=407, avg=234.98, stdev=25.43 00:10:37.066 lat (usec): min=192, max=416, avg=243.36, stdev=25.54 00:10:37.066 clat percentiles 
(usec): 00:10:37.066 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 212], 00:10:37.066 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 237], 00:10:37.066 | 70.00th=[ 247], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 285], 00:10:37.066 | 99.00th=[ 297], 99.50th=[ 306], 99.90th=[ 314], 99.95th=[ 334], 00:10:37.066 | 99.99th=[ 408] 00:10:37.066 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:37.066 slat (usec): min=10, max=22007, avg=20.62, stdev=434.74 00:10:37.066 clat (usec): min=120, max=271, avg=167.97, stdev=27.21 00:10:37.066 lat (usec): min=131, max=22220, avg=188.59, stdev=436.49 00:10:37.066 clat percentiles (usec): 00:10:37.066 | 1.00th=[ 131], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:10:37.066 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:10:37.066 | 70.00th=[ 172], 80.00th=[ 182], 90.00th=[ 206], 95.00th=[ 239], 00:10:37.066 | 99.00th=[ 245], 99.50th=[ 249], 99.90th=[ 258], 99.95th=[ 273], 00:10:37.066 | 99.99th=[ 273] 00:10:37.066 bw ( KiB/s): min= 9168, max= 9168, per=37.60%, avg=9168.00, stdev= 0.00, samples=1 00:10:37.066 iops : min= 2292, max= 2292, avg=2292.00, stdev= 0.00, samples=1 00:10:37.066 lat (usec) : 250=88.15%, 500=11.85% 00:10:37.066 cpu : usr=3.60%, sys=7.90%, ctx=4628, majf=0, minf=1 00:10:37.066 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.066 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.066 issued rwts: total=2065,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.066 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.066 00:10:37.066 Run status group 0 (all jobs): 00:10:37.066 READ: bw=17.4MiB/s (18.3MB/s), 87.3KiB/s-9542KiB/s (89.4kB/s-9771kB/s), io=17.6MiB (18.4MB), run=1001-1008msec 00:10:37.067 WRITE: bw=23.8MiB/s (25.0MB/s), 2032KiB/s-9.99MiB/s (2081kB/s-10.5MB/s), 
io=24.0MiB (25.2MB), run=1001-1008msec 00:10:37.067 00:10:37.067 Disk stats (read/write): 00:10:37.067 nvme0n1: ios=2098/2259, merge=0/0, ticks=449/355, in_queue=804, util=86.77% 00:10:37.067 nvme0n2: ios=33/512, merge=0/0, ticks=757/78, in_queue=835, util=87.08% 00:10:37.067 nvme0n3: ios=18/512, merge=0/0, ticks=738/92, in_queue=830, util=88.95% 00:10:37.067 nvme0n4: ios=1826/2048, merge=0/0, ticks=1381/317, in_queue=1698, util=98.42% 00:10:37.067 06:00:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:37.067 [global] 00:10:37.067 thread=1 00:10:37.067 invalidate=1 00:10:37.067 rw=randwrite 00:10:37.067 time_based=1 00:10:37.067 runtime=1 00:10:37.067 ioengine=libaio 00:10:37.067 direct=1 00:10:37.067 bs=4096 00:10:37.067 iodepth=1 00:10:37.067 norandommap=0 00:10:37.067 numjobs=1 00:10:37.067 00:10:37.067 verify_dump=1 00:10:37.067 verify_backlog=512 00:10:37.067 verify_state_save=0 00:10:37.067 do_verify=1 00:10:37.067 verify=crc32c-intel 00:10:37.067 [job0] 00:10:37.067 filename=/dev/nvme0n1 00:10:37.067 [job1] 00:10:37.067 filename=/dev/nvme0n2 00:10:37.067 [job2] 00:10:37.067 filename=/dev/nvme0n3 00:10:37.067 [job3] 00:10:37.067 filename=/dev/nvme0n4 00:10:37.067 Could not set queue depth (nvme0n1) 00:10:37.067 Could not set queue depth (nvme0n2) 00:10:37.067 Could not set queue depth (nvme0n3) 00:10:37.067 Could not set queue depth (nvme0n4) 00:10:37.329 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:37.329 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:37.329 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:37.329 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:10:37.329 fio-3.35 00:10:37.329 Starting 4 threads 00:10:38.705 00:10:38.705 job0: (groupid=0, jobs=1): err= 0: pid=856619: Sun Dec 15 06:00:58 2024 00:10:38.705 read: IOPS=21, BW=86.9KiB/s (89.0kB/s)(88.0KiB/1013msec) 00:10:38.705 slat (nsec): min=10014, max=23482, avg=21521.23, stdev=2609.09 00:10:38.705 clat (usec): min=40861, max=41198, avg=40978.74, stdev=81.63 00:10:38.705 lat (usec): min=40885, max=41208, avg=41000.27, stdev=79.98 00:10:38.705 clat percentiles (usec): 00:10:38.705 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:38.705 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:38.705 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:38.705 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:38.706 | 99.99th=[41157] 00:10:38.706 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:10:38.706 slat (nsec): min=9467, max=51919, avg=11146.72, stdev=2608.94 00:10:38.706 clat (usec): min=136, max=468, avg=200.93, stdev=29.95 00:10:38.706 lat (usec): min=147, max=479, avg=212.08, stdev=29.80 00:10:38.706 clat percentiles (usec): 00:10:38.706 | 1.00th=[ 141], 5.00th=[ 161], 10.00th=[ 169], 20.00th=[ 180], 00:10:38.706 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 202], 00:10:38.706 | 70.00th=[ 215], 80.00th=[ 229], 90.00th=[ 241], 95.00th=[ 249], 00:10:38.706 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 469], 99.95th=[ 469], 00:10:38.706 | 99.99th=[ 469] 00:10:38.706 bw ( KiB/s): min= 4087, max= 4087, per=20.49%, avg=4087.00, stdev= 0.00, samples=1 00:10:38.706 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:10:38.706 lat (usec) : 250=91.95%, 500=3.93% 00:10:38.706 lat (msec) : 50=4.12% 00:10:38.706 cpu : usr=0.49%, sys=0.89%, ctx=534, majf=0, minf=1 00:10:38.706 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:38.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:10:38.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.706 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.706 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:38.706 job1: (groupid=0, jobs=1): err= 0: pid=856632: Sun Dec 15 06:00:58 2024 00:10:38.706 read: IOPS=22, BW=91.6KiB/s (93.8kB/s)(92.0KiB/1004msec) 00:10:38.706 slat (nsec): min=17284, max=23636, avg=22787.35, stdev=1241.98 00:10:38.706 clat (usec): min=4712, max=41971, avg=39582.15, stdev=7610.72 00:10:38.706 lat (usec): min=4735, max=41994, avg=39604.93, stdev=7610.59 00:10:38.706 clat percentiles (usec): 00:10:38.706 | 1.00th=[ 4686], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:38.706 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:38.706 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:10:38.706 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:38.706 | 99.99th=[42206] 00:10:38.706 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:10:38.706 slat (nsec): min=9000, max=36033, avg=10206.09, stdev=1799.65 00:10:38.706 clat (usec): min=128, max=4057, avg=167.19, stdev=175.26 00:10:38.706 lat (usec): min=137, max=4068, avg=177.40, stdev=175.34 00:10:38.706 clat percentiles (usec): 00:10:38.706 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 145], 00:10:38.706 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 159], 00:10:38.706 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 186], 95.00th=[ 206], 00:10:38.706 | 99.00th=[ 239], 99.50th=[ 285], 99.90th=[ 4047], 99.95th=[ 4047], 00:10:38.706 | 99.99th=[ 4047] 00:10:38.706 bw ( KiB/s): min= 4087, max= 4087, per=20.49%, avg=4087.00, stdev= 0.00, samples=1 00:10:38.706 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:10:38.706 lat (usec) : 250=95.14%, 500=0.19%, 750=0.19% 00:10:38.706 lat (msec) : 10=0.37%, 50=4.11% 
00:10:38.706 cpu : usr=0.10%, sys=0.60%, ctx=536, majf=0, minf=1 00:10:38.706 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:38.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.706 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.706 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:38.706 job2: (groupid=0, jobs=1): err= 0: pid=856649: Sun Dec 15 06:00:58 2024 00:10:38.706 read: IOPS=1351, BW=5406KiB/s (5536kB/s)(5552KiB/1027msec) 00:10:38.706 slat (nsec): min=7609, max=27258, avg=8955.65, stdev=1800.22 00:10:38.706 clat (usec): min=201, max=41157, avg=527.26, stdev=3271.14 00:10:38.706 lat (usec): min=211, max=41175, avg=536.21, stdev=3272.26 00:10:38.706 clat percentiles (usec): 00:10:38.706 | 1.00th=[ 215], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 235], 00:10:38.706 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:10:38.706 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 297], 95.00th=[ 457], 00:10:38.706 | 99.00th=[ 506], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:38.706 | 99.99th=[41157] 00:10:38.706 write: IOPS=1495, BW=5982KiB/s (6126kB/s)(6144KiB/1027msec); 0 zone resets 00:10:38.706 slat (nsec): min=10648, max=38672, avg=11966.89, stdev=1778.02 00:10:38.706 clat (usec): min=120, max=282, avg=164.91, stdev=32.43 00:10:38.706 lat (usec): min=131, max=321, avg=176.88, stdev=32.97 00:10:38.706 clat percentiles (usec): 00:10:38.706 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 141], 00:10:38.706 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 155], 00:10:38.706 | 70.00th=[ 176], 80.00th=[ 192], 90.00th=[ 221], 95.00th=[ 239], 00:10:38.706 | 99.00th=[ 253], 99.50th=[ 258], 99.90th=[ 277], 99.95th=[ 285], 00:10:38.706 | 99.99th=[ 285] 00:10:38.706 bw ( KiB/s): min= 3272, max= 8998, per=30.76%, avg=6135.00, stdev=4048.89, 
samples=2 00:10:38.706 iops : min= 818, max= 2249, avg=1533.50, stdev=1011.87, samples=2 00:10:38.706 lat (usec) : 250=78.90%, 500=20.62%, 750=0.17% 00:10:38.706 lat (msec) : 50=0.31% 00:10:38.706 cpu : usr=3.22%, sys=3.90%, ctx=2925, majf=0, minf=1 00:10:38.706 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:38.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.706 issued rwts: total=1388,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.706 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:38.706 job3: (groupid=0, jobs=1): err= 0: pid=856655: Sun Dec 15 06:00:58 2024 00:10:38.706 read: IOPS=2293, BW=9175KiB/s (9395kB/s)(9184KiB/1001msec) 00:10:38.706 slat (nsec): min=6959, max=25938, avg=8121.72, stdev=1168.43 00:10:38.706 clat (usec): min=174, max=490, avg=235.04, stdev=29.37 00:10:38.706 lat (usec): min=183, max=498, avg=243.16, stdev=29.44 00:10:38.706 clat percentiles (usec): 00:10:38.706 | 1.00th=[ 186], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 210], 00:10:38.706 | 30.00th=[ 219], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 241], 00:10:38.706 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 285], 00:10:38.706 | 99.00th=[ 318], 99.50th=[ 334], 99.90th=[ 474], 99.95th=[ 486], 00:10:38.706 | 99.99th=[ 490] 00:10:38.706 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:38.706 slat (nsec): min=9554, max=51975, avg=10870.41, stdev=2031.81 00:10:38.706 clat (usec): min=119, max=300, avg=156.19, stdev=31.03 00:10:38.706 lat (usec): min=129, max=318, avg=167.06, stdev=31.59 00:10:38.706 clat percentiles (usec): 00:10:38.706 | 1.00th=[ 124], 5.00th=[ 129], 10.00th=[ 131], 20.00th=[ 135], 00:10:38.706 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 149], 00:10:38.706 | 70.00th=[ 155], 80.00th=[ 167], 90.00th=[ 208], 95.00th=[ 235], 00:10:38.706 | 99.00th=[ 
253], 99.50th=[ 258], 99.90th=[ 269], 99.95th=[ 269], 00:10:38.706 | 99.99th=[ 302] 00:10:38.706 bw ( KiB/s): min=10035, max=10035, per=50.32%, avg=10035.00, stdev= 0.00, samples=1 00:10:38.706 iops : min= 2508, max= 2508, avg=2508.00, stdev= 0.00, samples=1 00:10:38.706 lat (usec) : 250=87.66%, 500=12.34% 00:10:38.706 cpu : usr=3.40%, sys=8.10%, ctx=4856, majf=0, minf=2 00:10:38.706 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:38.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.706 issued rwts: total=2296,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.706 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:38.706 00:10:38.706 Run status group 0 (all jobs): 00:10:38.706 READ: bw=14.2MiB/s (14.9MB/s), 86.9KiB/s-9175KiB/s (89.0kB/s-9395kB/s), io=14.6MiB (15.3MB), run=1001-1027msec 00:10:38.706 WRITE: bw=19.5MiB/s (20.4MB/s), 2022KiB/s-9.99MiB/s (2070kB/s-10.5MB/s), io=20.0MiB (21.0MB), run=1001-1027msec 00:10:38.706 00:10:38.706 Disk stats (read/write): 00:10:38.706 nvme0n1: ios=68/512, merge=0/0, ticks=749/97, in_queue=846, util=86.57% 00:10:38.706 nvme0n2: ios=42/512, merge=0/0, ticks=1729/85, in_queue=1814, util=98.68% 00:10:38.706 nvme0n3: ios=1408/1536, merge=0/0, ticks=1496/227, in_queue=1723, util=98.44% 00:10:38.706 nvme0n4: ios=2041/2048, merge=0/0, ticks=567/308, in_queue=875, util=95.38% 00:10:38.706 06:00:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:38.706 [global] 00:10:38.706 thread=1 00:10:38.706 invalidate=1 00:10:38.706 rw=write 00:10:38.706 time_based=1 00:10:38.706 runtime=1 00:10:38.706 ioengine=libaio 00:10:38.706 direct=1 00:10:38.706 bs=4096 00:10:38.706 iodepth=128 00:10:38.706 norandommap=0 00:10:38.706 numjobs=1 00:10:38.706 
00:10:38.706 verify_dump=1 00:10:38.706 verify_backlog=512 00:10:38.706 verify_state_save=0 00:10:38.706 do_verify=1 00:10:38.706 verify=crc32c-intel 00:10:38.706 [job0] 00:10:38.706 filename=/dev/nvme0n1 00:10:38.706 [job1] 00:10:38.706 filename=/dev/nvme0n2 00:10:38.706 [job2] 00:10:38.706 filename=/dev/nvme0n3 00:10:38.706 [job3] 00:10:38.706 filename=/dev/nvme0n4 00:10:38.706 Could not set queue depth (nvme0n1) 00:10:38.706 Could not set queue depth (nvme0n2) 00:10:38.706 Could not set queue depth (nvme0n3) 00:10:38.706 Could not set queue depth (nvme0n4) 00:10:38.706 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:38.706 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:38.706 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:38.706 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:38.706 fio-3.35 00:10:38.706 Starting 4 threads 00:10:40.085 00:10:40.085 job0: (groupid=0, jobs=1): err= 0: pid=857092: Sun Dec 15 06:01:00 2024 00:10:40.085 read: IOPS=3428, BW=13.4MiB/s (14.0MB/s)(13.5MiB/1008msec) 00:10:40.085 slat (nsec): min=1351, max=18812k, avg=104191.96, stdev=715850.02 00:10:40.085 clat (usec): min=2726, max=65830, avg=13507.38, stdev=6844.29 00:10:40.085 lat (usec): min=6071, max=67884, avg=13611.58, stdev=6896.85 00:10:40.085 clat percentiles (usec): 00:10:40.085 | 1.00th=[ 6849], 5.00th=[ 8094], 10.00th=[ 9241], 20.00th=[ 9896], 00:10:40.085 | 30.00th=[10159], 40.00th=[10290], 50.00th=[11076], 60.00th=[11600], 00:10:40.085 | 70.00th=[13566], 80.00th=[14091], 90.00th=[21365], 95.00th=[29492], 00:10:40.085 | 99.00th=[38536], 99.50th=[47449], 99.90th=[65799], 99.95th=[65799], 00:10:40.085 | 99.99th=[65799] 00:10:40.085 write: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec); 0 zone resets 00:10:40.085 slat 
(usec): min=2, max=41653, avg=173.49, stdev=1098.67 00:10:40.085 clat (msec): min=5, max=104, avg=20.70, stdev=19.66 00:10:40.085 lat (msec): min=5, max=104, avg=20.87, stdev=19.80 00:10:40.085 clat percentiles (msec): 00:10:40.085 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 10], 00:10:40.085 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 13], 00:10:40.085 | 70.00th=[ 23], 80.00th=[ 25], 90.00th=[ 50], 95.00th=[ 73], 00:10:40.085 | 99.00th=[ 91], 99.50th=[ 96], 99.90th=[ 105], 99.95th=[ 105], 00:10:40.085 | 99.99th=[ 105] 00:10:40.085 bw ( KiB/s): min= 9608, max=19025, per=21.45%, avg=14316.50, stdev=6658.82, samples=2 00:10:40.085 iops : min= 2402, max= 4756, avg=3579.00, stdev=1664.53, samples=2 00:10:40.085 lat (msec) : 4=0.01%, 10=26.21%, 20=50.82%, 50=17.95%, 100=4.90% 00:10:40.085 lat (msec) : 250=0.10% 00:10:40.085 cpu : usr=2.58%, sys=4.97%, ctx=429, majf=0, minf=1 00:10:40.085 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:40.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.085 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.085 issued rwts: total=3456,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.085 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.085 job1: (groupid=0, jobs=1): err= 0: pid=857112: Sun Dec 15 06:01:00 2024 00:10:40.085 read: IOPS=3550, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:10:40.085 slat (nsec): min=1099, max=15795k, avg=122756.98, stdev=856441.02 00:10:40.085 clat (usec): min=5111, max=65040, avg=13563.79, stdev=7628.91 00:10:40.085 lat (usec): min=5120, max=65050, avg=13686.54, stdev=7725.09 00:10:40.085 clat percentiles (usec): 00:10:40.085 | 1.00th=[ 6194], 5.00th=[ 7635], 10.00th=[ 9110], 20.00th=[ 9503], 00:10:40.085 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10552], 60.00th=[12256], 00:10:40.085 | 70.00th=[12649], 80.00th=[17171], 90.00th=[19268], 95.00th=[28705], 00:10:40.085 | 
99.00th=[51119], 99.50th=[52167], 99.90th=[65274], 99.95th=[65274], 00:10:40.085 | 99.99th=[65274] 00:10:40.085 write: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec); 0 zone resets 00:10:40.085 slat (usec): min=2, max=8664, avg=127.09, stdev=582.89 00:10:40.085 clat (usec): min=2580, max=69149, avg=19336.22, stdev=12260.85 00:10:40.085 lat (usec): min=2587, max=69160, avg=19463.30, stdev=12330.83 00:10:40.085 clat percentiles (usec): 00:10:40.085 | 1.00th=[ 4621], 5.00th=[ 6521], 10.00th=[ 7373], 20.00th=[ 8356], 00:10:40.085 | 30.00th=[ 9503], 40.00th=[10945], 50.00th=[19268], 60.00th=[22676], 00:10:40.085 | 70.00th=[24511], 80.00th=[26870], 90.00th=[32900], 95.00th=[38536], 00:10:40.085 | 99.00th=[65799], 99.50th=[66847], 99.90th=[68682], 99.95th=[68682], 00:10:40.085 | 99.99th=[68682] 00:10:40.085 bw ( KiB/s): min=14024, max=17768, per=23.81%, avg=15896.00, stdev=2647.41, samples=2 00:10:40.085 iops : min= 3506, max= 4442, avg=3974.00, stdev=661.85, samples=2 00:10:40.085 lat (msec) : 4=0.16%, 10=35.35%, 20=33.57%, 50=28.88%, 100=2.04% 00:10:40.085 cpu : usr=3.36%, sys=4.35%, ctx=439, majf=0, minf=1 00:10:40.085 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:40.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.085 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.085 issued rwts: total=3590,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.085 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.085 job2: (groupid=0, jobs=1): err= 0: pid=857132: Sun Dec 15 06:01:00 2024 00:10:40.085 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:10:40.085 slat (nsec): min=1057, max=22037k, avg=104899.52, stdev=750102.28 00:10:40.085 clat (usec): min=7278, max=56003, avg=13528.56, stdev=9204.56 00:10:40.085 lat (usec): min=7655, max=56009, avg=13633.46, stdev=9243.66 00:10:40.085 clat percentiles (usec): 00:10:40.085 | 1.00th=[ 7832], 5.00th=[ 
8717], 10.00th=[ 9372], 20.00th=[ 9634], 00:10:40.085 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:10:40.085 | 70.00th=[10945], 80.00th=[13173], 90.00th=[21365], 95.00th=[36963], 00:10:40.085 | 99.00th=[55837], 99.50th=[55837], 99.90th=[55837], 99.95th=[55837], 00:10:40.085 | 99.99th=[55837] 00:10:40.085 write: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(19.9MiB/1003msec); 0 zone resets 00:10:40.085 slat (nsec): min=1920, max=16764k, avg=96954.72, stdev=618277.03 00:10:40.085 clat (usec): min=245, max=53978, avg=12072.01, stdev=6683.90 00:10:40.085 lat (usec): min=2446, max=53987, avg=12168.96, stdev=6720.62 00:10:40.085 clat percentiles (usec): 00:10:40.085 | 1.00th=[ 5211], 5.00th=[ 7767], 10.00th=[ 8094], 20.00th=[ 9241], 00:10:40.085 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10421], 00:10:40.085 | 70.00th=[11338], 80.00th=[12518], 90.00th=[16712], 95.00th=[18744], 00:10:40.085 | 99.00th=[49546], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740], 00:10:40.085 | 99.99th=[53740] 00:10:40.085 bw ( KiB/s): min=12632, max=27104, per=29.76%, avg=19868.00, stdev=10233.25, samples=2 00:10:40.085 iops : min= 3158, max= 6776, avg=4967.00, stdev=2558.31, samples=2 00:10:40.085 lat (usec) : 250=0.01% 00:10:40.085 lat (msec) : 4=0.37%, 10=42.88%, 20=48.28%, 50=7.16%, 100=1.29% 00:10:40.085 cpu : usr=2.20%, sys=4.79%, ctx=496, majf=0, minf=1 00:10:40.085 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:40.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.085 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.085 issued rwts: total=4608,5095,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.085 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.085 job3: (groupid=0, jobs=1): err= 0: pid=857138: Sun Dec 15 06:01:00 2024 00:10:40.085 read: IOPS=3801, BW=14.8MiB/s (15.6MB/s)(15.0MiB/1007msec) 00:10:40.085 slat (nsec): min=1238, 
max=12144k, avg=110280.76, stdev=703298.55 00:10:40.085 clat (usec): min=3347, max=42887, avg=13405.52, stdev=5464.53 00:10:40.085 lat (usec): min=5891, max=42896, avg=13515.80, stdev=5522.33 00:10:40.085 clat percentiles (usec): 00:10:40.085 | 1.00th=[ 5997], 5.00th=[ 7963], 10.00th=[ 8848], 20.00th=[10552], 00:10:40.085 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[12518], 00:10:40.086 | 70.00th=[13698], 80.00th=[15664], 90.00th=[17171], 95.00th=[24249], 00:10:40.086 | 99.00th=[38011], 99.50th=[39060], 99.90th=[42730], 99.95th=[42730], 00:10:40.086 | 99.99th=[42730] 00:10:40.086 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:10:40.086 slat (usec): min=2, max=8270, avg=133.07, stdev=572.02 00:10:40.086 clat (usec): min=2151, max=79197, avg=18622.40, stdev=13322.99 00:10:40.086 lat (usec): min=2158, max=79205, avg=18755.46, stdev=13400.30 00:10:40.086 clat percentiles (usec): 00:10:40.086 | 1.00th=[ 5014], 5.00th=[ 6849], 10.00th=[ 8455], 20.00th=[10159], 00:10:40.086 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12387], 60.00th=[15139], 00:10:40.086 | 70.00th=[23200], 80.00th=[24773], 90.00th=[34341], 95.00th=[49021], 00:10:40.086 | 99.00th=[76022], 99.50th=[78119], 99.90th=[79168], 99.95th=[79168], 00:10:40.086 | 99.99th=[79168] 00:10:40.086 bw ( KiB/s): min=10768, max=22000, per=24.55%, avg=16384.00, stdev=7942.22, samples=2 00:10:40.086 iops : min= 2692, max= 5500, avg=4096.00, stdev=1985.56, samples=2 00:10:40.086 lat (msec) : 4=0.14%, 10=18.19%, 20=60.71%, 50=18.50%, 100=2.46% 00:10:40.086 cpu : usr=2.39%, sys=4.37%, ctx=536, majf=0, minf=2 00:10:40.086 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:40.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.086 issued rwts: total=3828,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.086 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:10:40.086 00:10:40.086 Run status group 0 (all jobs): 00:10:40.086 READ: bw=59.8MiB/s (62.7MB/s), 13.4MiB/s-17.9MiB/s (14.0MB/s-18.8MB/s), io=60.5MiB (63.4MB), run=1003-1011msec 00:10:40.086 WRITE: bw=65.2MiB/s (68.4MB/s), 13.9MiB/s-19.8MiB/s (14.6MB/s-20.8MB/s), io=65.9MiB (69.1MB), run=1003-1011msec 00:10:40.086 00:10:40.086 Disk stats (read/write): 00:10:40.086 nvme0n1: ios=2900/3072, merge=0/0, ticks=17594/30463, in_queue=48057, util=97.70% 00:10:40.086 nvme0n2: ios=3096/3495, merge=0/0, ticks=40769/59702, in_queue=100471, util=97.56% 00:10:40.086 nvme0n3: ios=3682/4096, merge=0/0, ticks=14062/12505, in_queue=26567, util=97.60% 00:10:40.086 nvme0n4: ios=3463/3584, merge=0/0, ticks=35945/47539, in_queue=83484, util=90.31% 00:10:40.086 06:01:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:40.086 [global] 00:10:40.086 thread=1 00:10:40.086 invalidate=1 00:10:40.086 rw=randwrite 00:10:40.086 time_based=1 00:10:40.086 runtime=1 00:10:40.086 ioengine=libaio 00:10:40.086 direct=1 00:10:40.086 bs=4096 00:10:40.086 iodepth=128 00:10:40.086 norandommap=0 00:10:40.086 numjobs=1 00:10:40.086 00:10:40.086 verify_dump=1 00:10:40.086 verify_backlog=512 00:10:40.086 verify_state_save=0 00:10:40.086 do_verify=1 00:10:40.086 verify=crc32c-intel 00:10:40.086 [job0] 00:10:40.086 filename=/dev/nvme0n1 00:10:40.086 [job1] 00:10:40.086 filename=/dev/nvme0n2 00:10:40.086 [job2] 00:10:40.086 filename=/dev/nvme0n3 00:10:40.086 [job3] 00:10:40.086 filename=/dev/nvme0n4 00:10:40.086 Could not set queue depth (nvme0n1) 00:10:40.086 Could not set queue depth (nvme0n2) 00:10:40.086 Could not set queue depth (nvme0n3) 00:10:40.086 Could not set queue depth (nvme0n4) 00:10:40.345 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:40.345 job1: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:40.345 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:40.345 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:40.345 fio-3.35 00:10:40.345 Starting 4 threads 00:10:41.724 00:10:41.724 job0: (groupid=0, jobs=1): err= 0: pid=857520: Sun Dec 15 06:01:01 2024 00:10:41.724 read: IOPS=4750, BW=18.6MiB/s (19.5MB/s)(18.6MiB/1004msec) 00:10:41.724 slat (nsec): min=1071, max=21513k, avg=97680.39, stdev=685420.23 00:10:41.724 clat (usec): min=1266, max=59001, avg=12168.17, stdev=6925.40 00:10:41.724 lat (usec): min=3683, max=59019, avg=12265.85, stdev=6978.29 00:10:41.724 clat percentiles (usec): 00:10:41.724 | 1.00th=[ 5211], 5.00th=[ 7504], 10.00th=[ 8586], 20.00th=[ 9503], 00:10:41.724 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10683], 60.00th=[11076], 00:10:41.724 | 70.00th=[11731], 80.00th=[12125], 90.00th=[14484], 95.00th=[20055], 00:10:41.724 | 99.00th=[49546], 99.50th=[56886], 99.90th=[56886], 99.95th=[56886], 00:10:41.724 | 99.99th=[58983] 00:10:41.724 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:10:41.724 slat (nsec): min=1815, max=22720k, avg=97771.86, stdev=681878.94 00:10:41.724 clat (usec): min=2240, max=70037, avg=13430.45, stdev=9109.77 00:10:41.724 lat (usec): min=2247, max=70067, avg=13528.22, stdev=9179.50 00:10:41.724 clat percentiles (usec): 00:10:41.724 | 1.00th=[ 6587], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[ 9634], 00:10:41.724 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10945], 00:10:41.724 | 70.00th=[12125], 80.00th=[12911], 90.00th=[16909], 95.00th=[35914], 00:10:41.724 | 99.00th=[56886], 99.50th=[58459], 99.90th=[58459], 99.95th=[61080], 00:10:41.724 | 99.99th=[69731] 00:10:41.724 bw ( KiB/s): min=16384, max=24576, per=29.12%, avg=20480.00, stdev=5792.62, 
samples=2 00:10:41.724 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:10:41.724 lat (msec) : 2=0.01%, 4=0.44%, 10=36.57%, 20=55.60%, 50=6.05% 00:10:41.724 lat (msec) : 100=1.33% 00:10:41.724 cpu : usr=2.59%, sys=5.78%, ctx=518, majf=0, minf=1 00:10:41.724 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:41.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:41.724 issued rwts: total=4769,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.724 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:41.724 job1: (groupid=0, jobs=1): err= 0: pid=857521: Sun Dec 15 06:01:01 2024 00:10:41.724 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:10:41.724 slat (nsec): min=1416, max=18485k, avg=145040.26, stdev=904049.94 00:10:41.724 clat (usec): min=7365, max=74851, avg=17598.05, stdev=11543.17 00:10:41.724 lat (usec): min=7369, max=74859, avg=17743.09, stdev=11644.36 00:10:41.724 clat percentiles (usec): 00:10:41.724 | 1.00th=[ 7767], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10028], 00:10:41.724 | 30.00th=[10421], 40.00th=[10683], 50.00th=[11731], 60.00th=[14353], 00:10:41.724 | 70.00th=[17171], 80.00th=[26346], 90.00th=[36439], 95.00th=[38536], 00:10:41.724 | 99.00th=[56361], 99.50th=[74974], 99.90th=[74974], 99.95th=[74974], 00:10:41.724 | 99.99th=[74974] 00:10:41.724 write: IOPS=3300, BW=12.9MiB/s (13.5MB/s)(12.9MiB/1004msec); 0 zone resets 00:10:41.724 slat (usec): min=2, max=19651, avg=161.66, stdev=993.62 00:10:41.724 clat (usec): min=476, max=90496, avg=21433.19, stdev=13807.76 00:10:41.724 lat (usec): min=6056, max=90520, avg=21594.85, stdev=13913.45 00:10:41.724 clat percentiles (usec): 00:10:41.725 | 1.00th=[ 6521], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 00:10:41.725 | 30.00th=[10945], 40.00th=[13173], 50.00th=[18220], 60.00th=[21890], 00:10:41.725 | 70.00th=[25822], 
80.00th=[27395], 90.00th=[38536], 95.00th=[48497], 00:10:41.725 | 99.00th=[72877], 99.50th=[78119], 99.90th=[78119], 99.95th=[79168], 00:10:41.725 | 99.99th=[90702] 00:10:41.725 bw ( KiB/s): min=11168, max=14320, per=18.12%, avg=12744.00, stdev=2228.80, samples=2 00:10:41.725 iops : min= 2792, max= 3580, avg=3186.00, stdev=557.20, samples=2 00:10:41.725 lat (usec) : 500=0.02% 00:10:41.725 lat (msec) : 10=19.15%, 20=44.03%, 50=33.71%, 100=3.08% 00:10:41.725 cpu : usr=2.59%, sys=4.19%, ctx=382, majf=0, minf=1 00:10:41.725 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:41.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:41.725 issued rwts: total=3072,3314,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.725 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:41.725 job2: (groupid=0, jobs=1): err= 0: pid=857524: Sun Dec 15 06:01:01 2024 00:10:41.725 read: IOPS=4549, BW=17.8MiB/s (18.6MB/s)(17.8MiB/1004msec) 00:10:41.725 slat (nsec): min=1106, max=19326k, avg=110117.18, stdev=729747.59 00:10:41.725 clat (usec): min=548, max=31039, avg=14538.43, stdev=4261.13 00:10:41.725 lat (usec): min=3951, max=31046, avg=14648.55, stdev=4296.15 00:10:41.725 clat percentiles (usec): 00:10:41.725 | 1.00th=[ 4490], 5.00th=[ 7439], 10.00th=[ 8979], 20.00th=[11207], 00:10:41.725 | 30.00th=[13173], 40.00th=[13829], 50.00th=[14091], 60.00th=[14877], 00:10:41.725 | 70.00th=[16188], 80.00th=[17433], 90.00th=[19530], 95.00th=[20841], 00:10:41.725 | 99.00th=[27657], 99.50th=[30802], 99.90th=[31065], 99.95th=[31065], 00:10:41.725 | 99.99th=[31065] 00:10:41.725 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:10:41.725 slat (usec): min=2, max=10924, avg=96.58, stdev=672.81 00:10:41.725 clat (usec): min=656, max=32921, avg=13237.42, stdev=4582.55 00:10:41.725 lat (usec): min=662, max=32930, avg=13334.00, 
stdev=4624.96 00:10:41.725 clat percentiles (usec): 00:10:41.725 | 1.00th=[ 1303], 5.00th=[ 5866], 10.00th=[ 8094], 20.00th=[10028], 00:10:41.725 | 30.00th=[11207], 40.00th=[11994], 50.00th=[13304], 60.00th=[13960], 00:10:41.725 | 70.00th=[14615], 80.00th=[16319], 90.00th=[20055], 95.00th=[21103], 00:10:41.725 | 99.00th=[25560], 99.50th=[28443], 99.90th=[32900], 99.95th=[32900], 00:10:41.725 | 99.99th=[32900] 00:10:41.725 bw ( KiB/s): min=16384, max=20480, per=26.21%, avg=18432.00, stdev=2896.31, samples=2 00:10:41.725 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:10:41.725 lat (usec) : 750=0.12% 00:10:41.725 lat (msec) : 2=0.52%, 4=1.23%, 10=14.51%, 20=75.39%, 50=8.23% 00:10:41.725 cpu : usr=2.69%, sys=5.68%, ctx=327, majf=0, minf=1 00:10:41.725 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:41.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:41.725 issued rwts: total=4568,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.725 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:41.725 job3: (groupid=0, jobs=1): err= 0: pid=857525: Sun Dec 15 06:01:01 2024 00:10:41.725 read: IOPS=4331, BW=16.9MiB/s (17.7MB/s)(17.0MiB/1002msec) 00:10:41.725 slat (nsec): min=1070, max=8954.8k, avg=101166.06, stdev=582641.61 00:10:41.725 clat (usec): min=473, max=89652, avg=13441.06, stdev=5593.59 00:10:41.725 lat (usec): min=3356, max=89657, avg=13542.22, stdev=5606.37 00:10:41.725 clat percentiles (usec): 00:10:41.725 | 1.00th=[ 6128], 5.00th=[ 9896], 10.00th=[10814], 20.00th=[11469], 00:10:41.725 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12649], 60.00th=[13435], 00:10:41.725 | 70.00th=[13829], 80.00th=[14353], 90.00th=[15533], 95.00th=[17171], 00:10:41.725 | 99.00th=[32900], 99.50th=[39060], 99.90th=[89654], 99.95th=[89654], 00:10:41.725 | 99.99th=[89654] 00:10:41.725 write: IOPS=4598, 
BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:10:41.725 slat (nsec): min=1790, max=10202k, avg=117086.56, stdev=643472.01 00:10:41.725 clat (usec): min=4275, max=51188, avg=14823.52, stdev=6883.10 00:10:41.725 lat (usec): min=4283, max=51191, avg=14940.60, stdev=6919.66 00:10:41.725 clat percentiles (usec): 00:10:41.725 | 1.00th=[ 5014], 5.00th=[ 9110], 10.00th=[10421], 20.00th=[11469], 00:10:41.725 | 30.00th=[11863], 40.00th=[12518], 50.00th=[13173], 60.00th=[13566], 00:10:41.725 | 70.00th=[14222], 80.00th=[16188], 90.00th=[19268], 95.00th=[28705], 00:10:41.725 | 99.00th=[47973], 99.50th=[50070], 99.90th=[51119], 99.95th=[51119], 00:10:41.725 | 99.99th=[51119] 00:10:41.725 bw ( KiB/s): min=17968, max=18896, per=26.21%, avg=18432.00, stdev=656.20, samples=2 00:10:41.725 iops : min= 4492, max= 4724, avg=4608.00, stdev=164.05, samples=2 00:10:41.725 lat (usec) : 500=0.01% 00:10:41.725 lat (msec) : 4=0.36%, 10=6.30%, 20=87.08%, 50=5.82%, 100=0.42% 00:10:41.725 cpu : usr=3.20%, sys=4.10%, ctx=451, majf=0, minf=1 00:10:41.725 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:41.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:41.725 issued rwts: total=4340,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.725 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:41.725 00:10:41.725 Run status group 0 (all jobs): 00:10:41.725 READ: bw=65.2MiB/s (68.3MB/s), 12.0MiB/s-18.6MiB/s (12.5MB/s-19.5MB/s), io=65.4MiB (68.6MB), run=1002-1004msec 00:10:41.725 WRITE: bw=68.7MiB/s (72.0MB/s), 12.9MiB/s-19.9MiB/s (13.5MB/s-20.9MB/s), io=68.9MiB (72.3MB), run=1002-1004msec 00:10:41.725 00:10:41.725 Disk stats (read/write): 00:10:41.725 nvme0n1: ios=3859/4096, merge=0/0, ticks=18660/18317, in_queue=36977, util=99.60% 00:10:41.725 nvme0n2: ios=2099/2535, merge=0/0, ticks=14377/19358, in_queue=33735, util=98.05% 
00:10:41.725 nvme0n3: ios=3589/3847, merge=0/0, ticks=29246/27743, in_queue=56989, util=90.48% 00:10:41.725 nvme0n4: ios=3584/3987, merge=0/0, ticks=21463/20358, in_queue=41821, util=88.53% 00:10:41.725 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:41.725 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=857741 00:10:41.725 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:41.725 06:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:41.725 [global] 00:10:41.725 thread=1 00:10:41.725 invalidate=1 00:10:41.725 rw=read 00:10:41.725 time_based=1 00:10:41.725 runtime=10 00:10:41.725 ioengine=libaio 00:10:41.725 direct=1 00:10:41.725 bs=4096 00:10:41.725 iodepth=1 00:10:41.725 norandommap=1 00:10:41.725 numjobs=1 00:10:41.725 00:10:41.725 [job0] 00:10:41.725 filename=/dev/nvme0n1 00:10:41.725 [job1] 00:10:41.725 filename=/dev/nvme0n2 00:10:41.725 [job2] 00:10:41.725 filename=/dev/nvme0n3 00:10:41.725 [job3] 00:10:41.725 filename=/dev/nvme0n4 00:10:41.725 Could not set queue depth (nvme0n1) 00:10:41.725 Could not set queue depth (nvme0n2) 00:10:41.725 Could not set queue depth (nvme0n3) 00:10:41.725 Could not set queue depth (nvme0n4) 00:10:41.984 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.984 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.984 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.984 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.984 fio-3.35 00:10:41.984 Starting 4 threads 00:10:45.277 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:45.277 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=47382528, buflen=4096 00:10:45.277 fio: pid=857889, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:45.277 06:01:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:45.277 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=50020352, buflen=4096 00:10:45.277 fio: pid=857888, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:45.277 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:45.277 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:45.277 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:45.277 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:45.278 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=5406720, buflen=4096 00:10:45.278 fio: pid=857886, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:45.539 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:45.539 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:45.539 fio: io_u error on file /dev/nvme0n2: Operation not 
supported: read offset=50733056, buflen=4096 00:10:45.539 fio: pid=857887, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:45.539 00:10:45.539 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=857886: Sun Dec 15 06:01:05 2024 00:10:45.539 read: IOPS=417, BW=1667KiB/s (1707kB/s)(5280KiB/3167msec) 00:10:45.539 slat (usec): min=3, max=12685, avg=23.07, stdev=409.43 00:10:45.539 clat (usec): min=159, max=42210, avg=2357.90, stdev=9020.82 00:10:45.539 lat (usec): min=163, max=42223, avg=2380.97, stdev=9027.95 00:10:45.539 clat percentiles (usec): 00:10:45.539 | 1.00th=[ 176], 5.00th=[ 196], 10.00th=[ 212], 20.00th=[ 227], 00:10:45.539 | 30.00th=[ 243], 40.00th=[ 253], 50.00th=[ 265], 60.00th=[ 269], 00:10:45.539 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[40633], 00:10:45.539 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:45.539 | 99.99th=[42206] 00:10:45.539 bw ( KiB/s): min= 96, max= 5887, per=3.50%, avg=1558.50, stdev=2431.41, samples=6 00:10:45.539 iops : min= 24, max= 1471, avg=389.50, stdev=607.59, samples=6 00:10:45.539 lat (usec) : 250=36.94%, 500=57.76% 00:10:45.539 lat (msec) : 2=0.08%, 50=5.15% 00:10:45.539 cpu : usr=0.38%, sys=0.41%, ctx=1324, majf=0, minf=1 00:10:45.539 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.539 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.539 issued rwts: total=1321,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.539 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.539 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=857887: Sun Dec 15 06:01:05 2024 00:10:45.539 read: IOPS=3673, BW=14.3MiB/s (15.0MB/s)(48.4MiB/3372msec) 00:10:45.539 slat (usec): min=6, max=15710, avg=10.40, 
stdev=204.76 00:10:45.539 clat (usec): min=174, max=42432, avg=258.94, stdev=1282.30 00:10:45.539 lat (usec): min=182, max=47906, avg=269.34, stdev=1315.85 00:10:45.539 clat percentiles (usec): 00:10:45.539 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:10:45.539 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223], 00:10:45.539 | 70.00th=[ 227], 80.00th=[ 231], 90.00th=[ 239], 95.00th=[ 247], 00:10:45.539 | 99.00th=[ 269], 99.50th=[ 281], 99.90th=[ 404], 99.95th=[41681], 00:10:45.539 | 99.99th=[42206] 00:10:45.539 bw ( KiB/s): min= 9322, max=17672, per=36.38%, avg=16179.00, stdev=3362.63, samples=6 00:10:45.539 iops : min= 2330, max= 4418, avg=4044.67, stdev=840.86, samples=6 00:10:45.539 lat (usec) : 250=95.87%, 500=4.03% 00:10:45.539 lat (msec) : 50=0.10% 00:10:45.539 cpu : usr=0.77%, sys=3.59%, ctx=12391, majf=0, minf=2 00:10:45.539 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.539 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.539 issued rwts: total=12387,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.539 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.539 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=857888: Sun Dec 15 06:01:05 2024 00:10:45.539 read: IOPS=4155, BW=16.2MiB/s (17.0MB/s)(47.7MiB/2939msec) 00:10:45.539 slat (usec): min=5, max=12118, avg= 8.60, stdev=124.67 00:10:45.539 clat (usec): min=171, max=1110, avg=229.21, stdev=22.11 00:10:45.539 lat (usec): min=178, max=12531, avg=237.81, stdev=129.06 00:10:45.539 clat percentiles (usec): 00:10:45.539 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 215], 00:10:45.539 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 231], 00:10:45.539 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 253], 95.00th=[ 269], 00:10:45.539 | 
99.00th=[ 293], 99.50th=[ 302], 99.90th=[ 330], 99.95th=[ 379], 00:10:45.539 | 99.99th=[ 469] 00:10:45.539 bw ( KiB/s): min=16408, max=17152, per=38.14%, avg=16958.40, stdev=313.96, samples=5 00:10:45.539 iops : min= 4102, max= 4288, avg=4239.60, stdev=78.49, samples=5 00:10:45.539 lat (usec) : 250=87.84%, 500=12.14% 00:10:45.539 lat (msec) : 2=0.01% 00:10:45.539 cpu : usr=0.99%, sys=3.74%, ctx=12215, majf=0, minf=2 00:10:45.539 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.539 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.539 issued rwts: total=12213,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.539 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.539 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=857889: Sun Dec 15 06:01:05 2024 00:10:45.539 read: IOPS=4262, BW=16.6MiB/s (17.5MB/s)(45.2MiB/2714msec) 00:10:45.539 slat (nsec): min=5818, max=35889, avg=8491.47, stdev=1239.32 00:10:45.539 clat (usec): min=182, max=443, avg=223.88, stdev=16.09 00:10:45.539 lat (usec): min=190, max=479, avg=232.37, stdev=16.14 00:10:45.539 clat percentiles (usec): 00:10:45.539 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 210], 00:10:45.539 | 30.00th=[ 217], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 227], 00:10:45.539 | 70.00th=[ 231], 80.00th=[ 235], 90.00th=[ 243], 95.00th=[ 251], 00:10:45.539 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 314], 99.95th=[ 318], 00:10:45.539 | 99.99th=[ 424] 00:10:45.539 bw ( KiB/s): min=17000, max=17640, per=38.63%, avg=17177.60, stdev=276.57, samples=5 00:10:45.539 iops : min= 4250, max= 4410, avg=4294.40, stdev=69.14, samples=5 00:10:45.539 lat (usec) : 250=94.51%, 500=5.48% 00:10:45.539 cpu : usr=1.22%, sys=4.53%, ctx=11569, majf=0, minf=2 00:10:45.539 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:10:45.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.539 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.539 issued rwts: total=11569,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.539 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.539 00:10:45.539 Run status group 0 (all jobs): 00:10:45.539 READ: bw=43.4MiB/s (45.5MB/s), 1667KiB/s-16.6MiB/s (1707kB/s-17.5MB/s), io=146MiB (154MB), run=2714-3372msec 00:10:45.539 00:10:45.539 Disk stats (read/write): 00:10:45.539 nvme0n1: ios=1319/0, merge=0/0, ticks=3054/0, in_queue=3054, util=95.13% 00:10:45.539 nvme0n2: ios=12386/0, merge=0/0, ticks=3143/0, in_queue=3143, util=95.35% 00:10:45.539 nvme0n3: ios=11926/0, merge=0/0, ticks=2680/0, in_queue=2680, util=95.88% 00:10:45.539 nvme0n4: ios=11167/0, merge=0/0, ticks=2428/0, in_queue=2428, util=96.41% 00:10:45.798 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:45.798 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:45.798 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:45.799 06:01:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:46.057 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:46.057 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:46.317 06:01:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:46.317 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:46.577 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:46.577 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 857741 00:10:46.577 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:46.577 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:46.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.577 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:46.577 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:46.577 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:46.577 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:46.577 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:46.577 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:46.577 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:46.577 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:46.577 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:46.577 nvmf hotplug test: fio 
failed as expected 00:10:46.577 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:46.836 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:46.836 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:46.836 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:46.836 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:46.836 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:46.836 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:46.836 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:46.836 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:46.836 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:46.836 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:46.836 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:46.836 rmmod nvme_tcp 00:10:46.836 rmmod nvme_fabrics 00:10:46.836 rmmod nvme_keyring 00:10:46.836 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:46.836 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:46.836 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:46.836 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 854881 ']' 00:10:46.836 06:01:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 854881 00:10:46.836 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 854881 ']' 00:10:46.836 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 854881 00:10:46.836 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:46.836 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.836 06:01:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 854881 00:10:47.096 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:47.096 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:47.096 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 854881' 00:10:47.096 killing process with pid 854881 00:10:47.096 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 854881 00:10:47.096 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 854881 00:10:47.096 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:47.096 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:47.096 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:47.096 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:47.096 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:47.096 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:10:47.096 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:47.096 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:47.096 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:47.096 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.096 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.096 06:01:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:49.636 00:10:49.636 real 0m26.843s 00:10:49.636 user 1m47.223s 00:10:49.636 sys 0m8.865s 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.636 ************************************ 00:10:49.636 END TEST nvmf_fio_target 00:10:49.636 ************************************ 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:49.636 ************************************ 00:10:49.636 START TEST nvmf_bdevio 00:10:49.636 ************************************ 00:10:49.636 06:01:09 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:49.636 * Looking for test storage... 00:10:49.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@344 -- # case "$op" in 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 
'LCOV_OPTS= 00:10:49.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.636 --rc genhtml_branch_coverage=1 00:10:49.636 --rc genhtml_function_coverage=1 00:10:49.636 --rc genhtml_legend=1 00:10:49.636 --rc geninfo_all_blocks=1 00:10:49.636 --rc geninfo_unexecuted_blocks=1 00:10:49.636 00:10:49.636 ' 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:49.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.636 --rc genhtml_branch_coverage=1 00:10:49.636 --rc genhtml_function_coverage=1 00:10:49.636 --rc genhtml_legend=1 00:10:49.636 --rc geninfo_all_blocks=1 00:10:49.636 --rc geninfo_unexecuted_blocks=1 00:10:49.636 00:10:49.636 ' 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:49.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.636 --rc genhtml_branch_coverage=1 00:10:49.636 --rc genhtml_function_coverage=1 00:10:49.636 --rc genhtml_legend=1 00:10:49.636 --rc geninfo_all_blocks=1 00:10:49.636 --rc geninfo_unexecuted_blocks=1 00:10:49.636 00:10:49.636 ' 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:49.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.636 --rc genhtml_branch_coverage=1 00:10:49.636 --rc genhtml_function_coverage=1 00:10:49.636 --rc genhtml_legend=1 00:10:49.636 --rc geninfo_all_blocks=1 00:10:49.636 --rc geninfo_unexecuted_blocks=1 00:10:49.636 00:10:49.636 ' 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:49.636 06:01:09 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:49.636 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # 
shopt -s extglob 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:49.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:49.637 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:56.214 06:01:15 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:56.214 06:01:15 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:56.214 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:56.214 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:56.214 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:56.215 
06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:56.215 Found net devices under 0000:af:00.0: cvl_0_0 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:56.215 Found net devices under 0000:af:00.1: cvl_0_1 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:56.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:56.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:10:56.215 00:10:56.215 --- 10.0.0.2 ping statistics --- 00:10:56.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.215 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:56.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:56.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:10:56.215 00:10:56.215 --- 10.0.0.1 ping statistics --- 00:10:56.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.215 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:56.215 06:01:15 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=862204 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 862204 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 862204 ']' 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.215 [2024-12-15 06:01:15.537974] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:10:56.215 [2024-12-15 06:01:15.538041] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.215 [2024-12-15 06:01:15.615835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.215 [2024-12-15 06:01:15.638754] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.215 [2024-12-15 06:01:15.638794] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:56.215 [2024-12-15 06:01:15.638801] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.215 [2024-12-15 06:01:15.638807] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.215 [2024-12-15 06:01:15.638812] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:56.215 [2024-12-15 06:01:15.640345] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:10:56.215 [2024-12-15 06:01:15.640455] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:10:56.215 [2024-12-15 06:01:15.640562] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.215 [2024-12-15 06:01:15.640563] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.215 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.215 [2024-12-15 06:01:15.776482] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.216 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.216 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:56.216 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.216 06:01:15 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.216 Malloc0 00:10:56.216 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.216 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:56.216 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.216 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.216 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.216 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:56.216 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.216 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.216 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.216 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:56.216 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.216 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.216 [2024-12-15 06:01:15.835375] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:56.216 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.216 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:10:56.216 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:56.216 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:56.216 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:56.216 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:56.216 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:56.216 { 00:10:56.216 "params": { 00:10:56.216 "name": "Nvme$subsystem", 00:10:56.216 "trtype": "$TEST_TRANSPORT", 00:10:56.216 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:56.216 "adrfam": "ipv4", 00:10:56.216 "trsvcid": "$NVMF_PORT", 00:10:56.216 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:56.216 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:56.216 "hdgst": ${hdgst:-false}, 00:10:56.216 "ddgst": ${ddgst:-false} 00:10:56.216 }, 00:10:56.216 "method": "bdev_nvme_attach_controller" 00:10:56.216 } 00:10:56.216 EOF 00:10:56.216 )") 00:10:56.216 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:56.216 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:10:56.216 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:56.216 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:56.216 "params": { 00:10:56.216 "name": "Nvme1", 00:10:56.216 "trtype": "tcp", 00:10:56.216 "traddr": "10.0.0.2", 00:10:56.216 "adrfam": "ipv4", 00:10:56.216 "trsvcid": "4420", 00:10:56.216 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:56.216 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:56.216 "hdgst": false, 00:10:56.216 "ddgst": false 00:10:56.216 }, 00:10:56.216 "method": "bdev_nvme_attach_controller" 00:10:56.216 }' 00:10:56.216 [2024-12-15 06:01:15.885491] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:10:56.216 [2024-12-15 06:01:15.885533] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid862319 ] 00:10:56.216 [2024-12-15 06:01:15.959784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:56.216 [2024-12-15 06:01:15.984713] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.216 [2024-12-15 06:01:15.984823] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.216 [2024-12-15 06:01:15.984822] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.216 I/O targets: 00:10:56.216 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:56.216 00:10:56.216 00:10:56.216 CUnit - A unit testing framework for C - Version 2.1-3 00:10:56.216 http://cunit.sourceforge.net/ 00:10:56.216 00:10:56.216 00:10:56.216 Suite: bdevio tests on: Nvme1n1 00:10:56.216 Test: blockdev write read block ...passed 00:10:56.476 Test: blockdev write zeroes read block ...passed 00:10:56.476 Test: blockdev write zeroes read no split ...passed 00:10:56.476 Test: blockdev write zeroes read split 
...passed 00:10:56.476 Test: blockdev write zeroes read split partial ...passed 00:10:56.476 Test: blockdev reset ...[2024-12-15 06:01:16.409605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:56.476 [2024-12-15 06:01:16.409665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1241340 (9): Bad file descriptor 00:10:56.476 [2024-12-15 06:01:16.463385] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:10:56.476 passed 00:10:56.476 Test: blockdev write read 8 blocks ...passed 00:10:56.476 Test: blockdev write read size > 128k ...passed 00:10:56.476 Test: blockdev write read invalid size ...passed 00:10:56.476 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:56.476 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:56.476 Test: blockdev write read max offset ...passed 00:10:56.735 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:56.735 Test: blockdev writev readv 8 blocks ...passed 00:10:56.735 Test: blockdev writev readv 30 x 1block ...passed 00:10:56.735 Test: blockdev writev readv block ...passed 00:10:56.735 Test: blockdev writev readv size > 128k ...passed 00:10:56.735 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:56.735 Test: blockdev comparev and writev ...[2024-12-15 06:01:16.675756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.735 [2024-12-15 06:01:16.675791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:56.735 [2024-12-15 06:01:16.675805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.735 [2024-12-15 
06:01:16.675813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:56.735 [2024-12-15 06:01:16.676054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.735 [2024-12-15 06:01:16.676068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:56.735 [2024-12-15 06:01:16.676081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.735 [2024-12-15 06:01:16.676095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:56.735 [2024-12-15 06:01:16.676332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.735 [2024-12-15 06:01:16.676343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:56.735 [2024-12-15 06:01:16.676354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.735 [2024-12-15 06:01:16.676361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:56.735 [2024-12-15 06:01:16.676592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.735 [2024-12-15 06:01:16.676602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:56.735 [2024-12-15 06:01:16.676614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.735 [2024-12-15 06:01:16.676620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:56.735 passed 00:10:56.735 Test: blockdev nvme passthru rw ...passed 00:10:56.735 Test: blockdev nvme passthru vendor specific ...[2024-12-15 06:01:16.758329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:56.735 [2024-12-15 06:01:16.758349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:56.735 [2024-12-15 06:01:16.758450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:56.735 [2024-12-15 06:01:16.758460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:56.735 [2024-12-15 06:01:16.758561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:56.735 [2024-12-15 06:01:16.758573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:56.735 [2024-12-15 06:01:16.758679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:56.735 [2024-12-15 06:01:16.758689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:56.735 passed 00:10:56.735 Test: blockdev nvme admin passthru ...passed 00:10:56.735 Test: blockdev copy ...passed 00:10:56.735 00:10:56.735 Run Summary: Type Total Ran Passed Failed Inactive 00:10:56.735 suites 1 1 n/a 0 0 00:10:56.735 tests 23 23 23 0 0 00:10:56.735 asserts 152 152 152 0 n/a 00:10:56.735 00:10:56.735 Elapsed time = 1.043 seconds 
00:10:56.995 06:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:56.995 06:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.995 06:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.995 06:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.995 06:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:56.995 06:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:56.995 06:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:56.995 06:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:56.995 06:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:56.995 06:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:56.995 06:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:56.995 06:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:56.995 rmmod nvme_tcp 00:10:56.995 rmmod nvme_fabrics 00:10:56.995 rmmod nvme_keyring 00:10:56.995 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:56.995 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:56.995 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:56.995 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 862204 ']' 00:10:56.995 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 862204 00:10:56.995 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 862204 ']' 00:10:56.995 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 862204 00:10:56.995 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:56.995 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:56.995 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 862204 00:10:56.995 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:56.995 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:56.995 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 862204' 00:10:56.995 killing process with pid 862204 00:10:56.995 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 862204 00:10:56.995 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 862204 00:10:57.255 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:57.255 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:57.255 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:57.255 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:57.255 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:57.255 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:57.255 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:57.255 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:10:57.255 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:57.255 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.255 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.255 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:59.795 00:10:59.795 real 0m10.024s 00:10:59.795 user 0m10.449s 00:10:59.795 sys 0m4.988s 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:59.795 ************************************ 00:10:59.795 END TEST nvmf_bdevio 00:10:59.795 ************************************ 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:59.795 00:10:59.795 real 4m35.184s 00:10:59.795 user 10m21.260s 00:10:59.795 sys 1m37.072s 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:59.795 ************************************ 00:10:59.795 END TEST nvmf_target_core 00:10:59.795 ************************************ 00:10:59.795 06:01:19 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:59.795 06:01:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:59.795 06:01:19 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.795 06:01:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:10:59.795 ************************************ 00:10:59.795 START TEST nvmf_target_extra 00:10:59.795 ************************************ 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:59.795 * Looking for test storage... 00:10:59.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.795 06:01:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:59.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.796 --rc genhtml_branch_coverage=1 00:10:59.796 --rc genhtml_function_coverage=1 00:10:59.796 --rc genhtml_legend=1 00:10:59.796 --rc geninfo_all_blocks=1 
00:10:59.796 --rc geninfo_unexecuted_blocks=1 00:10:59.796 00:10:59.796 ' 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:59.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.796 --rc genhtml_branch_coverage=1 00:10:59.796 --rc genhtml_function_coverage=1 00:10:59.796 --rc genhtml_legend=1 00:10:59.796 --rc geninfo_all_blocks=1 00:10:59.796 --rc geninfo_unexecuted_blocks=1 00:10:59.796 00:10:59.796 ' 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:59.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.796 --rc genhtml_branch_coverage=1 00:10:59.796 --rc genhtml_function_coverage=1 00:10:59.796 --rc genhtml_legend=1 00:10:59.796 --rc geninfo_all_blocks=1 00:10:59.796 --rc geninfo_unexecuted_blocks=1 00:10:59.796 00:10:59.796 ' 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:59.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.796 --rc genhtml_branch_coverage=1 00:10:59.796 --rc genhtml_function_coverage=1 00:10:59.796 --rc genhtml_legend=1 00:10:59.796 --rc geninfo_all_blocks=1 00:10:59.796 --rc geninfo_unexecuted_blocks=1 00:10:59.796 00:10:59.796 ' 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:59.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:59.796 ************************************ 00:10:59.796 START TEST nvmf_example 00:10:59.796 ************************************ 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:59.796 * Looking for test storage... 00:10:59.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.796 
06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
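The `cmp_versions` trace running through here splits each version string on dots and compares the fields numerically, one position at a time. A standalone bash sketch of that less-than check (hypothetical helper name `version_lt`; the real logic lives in `scripts/common.sh`) might look like:

```shell
# Hypothetical standalone sketch of the dotted-version "less than" test
# traced in the log; not the script's own function.
version_lt() {
    local -a ver1 ver2
    local i v1 v2 IFS=.
    read -ra ver1 <<< "$1"              # split "1.15" into (1 15)
    read -ra ver2 <<< "$2"
    for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
        v1=${ver1[i]:-0}                # missing fields count as 0
        v2=${ver2[i]:-0}
        if ((v1 < v2)); then return 0; fi
        if ((v1 > v2)); then return 1; fi
    done
    return 1                            # equal versions are not "less than"
}
```

With this sketch, `version_lt 1.15 2` succeeds, matching the `lt 1.15 2` comparison being traced in the log.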
00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.796 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:59.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.796 --rc genhtml_branch_coverage=1 00:10:59.796 --rc genhtml_function_coverage=1 00:10:59.796 --rc genhtml_legend=1 00:10:59.796 --rc geninfo_all_blocks=1 00:10:59.796 --rc geninfo_unexecuted_blocks=1 00:10:59.796 00:10:59.797 ' 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:59.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.797 --rc genhtml_branch_coverage=1 00:10:59.797 --rc genhtml_function_coverage=1 00:10:59.797 --rc genhtml_legend=1 00:10:59.797 --rc geninfo_all_blocks=1 00:10:59.797 --rc geninfo_unexecuted_blocks=1 00:10:59.797 00:10:59.797 ' 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:59.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.797 --rc genhtml_branch_coverage=1 00:10:59.797 --rc genhtml_function_coverage=1 00:10:59.797 --rc genhtml_legend=1 00:10:59.797 --rc geninfo_all_blocks=1 00:10:59.797 --rc geninfo_unexecuted_blocks=1 00:10:59.797 00:10:59.797 ' 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:59.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.797 --rc 
genhtml_branch_coverage=1 00:10:59.797 --rc genhtml_function_coverage=1 00:10:59.797 --rc genhtml_legend=1 00:10:59.797 --rc geninfo_all_blocks=1 00:10:59.797 --rc geninfo_unexecuted_blocks=1 00:10:59.797 00:10:59.797 ' 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:59.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:59.797 06:01:19 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.797 
06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:59.797 06:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:06.456 06:01:25 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:06.456 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:06.456 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:06.456 Found net devices under 0000:af:00.0: cvl_0_0 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:06.456 06:01:25 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:06.456 Found net devices under 0000:af:00.1: cvl_0_1 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:06.456 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:06.456 
06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:06.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:06.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:11:06.457 00:11:06.457 --- 10.0.0.2 ping statistics --- 00:11:06.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.457 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:06.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:06.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:11:06.457 00:11:06.457 --- 10.0.0.1 ping statistics --- 00:11:06.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.457 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:06.457 06:01:25 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=866076 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 866076 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 866076 ']' 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:06.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:06.457 06:01:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:06.717 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:06.717 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:06.717 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:06.717 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:06.717 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:06.717 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:06.717 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.717 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:06.717 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.717 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:06.717 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.717 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:06.717 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.717 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:06.717 06:01:26 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:06.717 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.717 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:06.717 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.717 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:06.717 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:06.717 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.717 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:06.717 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.717 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:06.717 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.717 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:06.976 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.976 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:06.976 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:16.959 Initializing NVMe Controllers 00:11:16.959 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:16.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:16.959 Initialization complete. Launching workers. 00:11:16.959 ======================================================== 00:11:16.959 Latency(us) 00:11:16.959 Device Information : IOPS MiB/s Average min max 00:11:16.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18492.19 72.24 3461.72 686.42 16297.50 00:11:16.959 ======================================================== 00:11:16.959 Total : 18492.19 72.24 3461.72 686.42 16297.50 00:11:16.959 00:11:17.219 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:17.219 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:17.219 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:17.219 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:17.219 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:17.219 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:17.219 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:17.219 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:17.219 rmmod nvme_tcp 00:11:17.219 rmmod nvme_fabrics 00:11:17.219 rmmod nvme_keyring 00:11:17.219 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:17.219 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
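For reference, the target bring-up and perf run traced above correspond to the following RPC sequence. This is a sketch reconstructed from the trace, not the test script itself: the `rpc.py` path is assumed from a standard SPDK checkout, and the address, NQN, and serial are taken verbatim from the log. It requires a running `nvmf` target application and is shown here only to make the traced steps easier to follow.

```
# Sketch of the sequence exercised by nvmf_example.sh above (assumes a
# running SPDK nvmf target listening on /var/tmp/spdk.sock).
RPC=./scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192      # TCP transport, 8192-byte in-capsule data
$RPC bdev_malloc_create 64 512                    # 64 MiB malloc bdev, 512-byte blocks -> Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Drive I/O against the target: 64 queue depth, 4 KiB random mixed R/W for 10 s.
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
```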
00:11:17.219 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:17.219 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 866076 ']' 00:11:17.219 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 866076 00:11:17.219 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 866076 ']' 00:11:17.219 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 866076 00:11:17.219 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:17.219 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:17.219 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 866076 00:11:17.219 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:17.219 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:17.219 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 866076' 00:11:17.219 killing process with pid 866076 00:11:17.219 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 866076 00:11:17.219 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 866076 00:11:17.478 nvmf threads initialize successfully 00:11:17.478 bdev subsystem init successfully 00:11:17.478 created a nvmf target service 00:11:17.478 create targets's poll groups done 00:11:17.478 all subsystems of target started 00:11:17.478 nvmf target is running 00:11:17.478 all subsystems of target stopped 00:11:17.478 destroy targets's poll groups done 00:11:17.478 destroyed the nvmf target service 00:11:17.478 bdev subsystem finish 
successfully 00:11:17.478 nvmf threads destroy successfully 00:11:17.478 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:17.478 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:17.478 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:17.479 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:17.479 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:17.479 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:17.479 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:17.479 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:17.479 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:17.479 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.479 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.479 06:01:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.387 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:19.387 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:19.387 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:19.387 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:19.387 00:11:19.387 real 0m19.825s 00:11:19.387 user 0m46.383s 00:11:19.387 sys 0m5.964s 00:11:19.387 06:01:39 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.387 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:19.387 ************************************ 00:11:19.387 END TEST nvmf_example 00:11:19.387 ************************************ 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:19.647 ************************************ 00:11:19.647 START TEST nvmf_filesystem 00:11:19.647 ************************************ 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:19.647 * Looking for test storage... 
00:11:19.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:19.647 
06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:19.647 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:19.647 --rc genhtml_branch_coverage=1 00:11:19.647 --rc genhtml_function_coverage=1 00:11:19.647 --rc genhtml_legend=1 00:11:19.647 --rc geninfo_all_blocks=1 00:11:19.647 --rc geninfo_unexecuted_blocks=1 00:11:19.647 00:11:19.647 ' 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:19.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.647 --rc genhtml_branch_coverage=1 00:11:19.647 --rc genhtml_function_coverage=1 00:11:19.647 --rc genhtml_legend=1 00:11:19.647 --rc geninfo_all_blocks=1 00:11:19.647 --rc geninfo_unexecuted_blocks=1 00:11:19.647 00:11:19.647 ' 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:19.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.647 --rc genhtml_branch_coverage=1 00:11:19.647 --rc genhtml_function_coverage=1 00:11:19.647 --rc genhtml_legend=1 00:11:19.647 --rc geninfo_all_blocks=1 00:11:19.647 --rc geninfo_unexecuted_blocks=1 00:11:19.647 00:11:19.647 ' 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:19.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.647 --rc genhtml_branch_coverage=1 00:11:19.647 --rc genhtml_function_coverage=1 00:11:19.647 --rc genhtml_legend=1 00:11:19.647 --rc geninfo_all_blocks=1 00:11:19.647 --rc geninfo_unexecuted_blocks=1 00:11:19.647 00:11:19.647 ' 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:19.647 06:01:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:19.647 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:19.648 06:01:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:19.648 06:01:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 
-- # CONFIG_ARCH=native 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:19.648 
06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:19.648 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:19.911 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:19.911 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:19.911 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:19.911 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:19.911 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:19.911 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:19.911 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:19.911 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:19.911 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:19.911 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:19.911 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:19.911 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:19.911 06:01:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:19.911 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:19.911 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:19.911 #define SPDK_CONFIG_H 00:11:19.911 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:19.911 #define SPDK_CONFIG_APPS 1 00:11:19.911 #define SPDK_CONFIG_ARCH native 00:11:19.911 #undef SPDK_CONFIG_ASAN 00:11:19.911 #undef SPDK_CONFIG_AVAHI 00:11:19.911 #undef SPDK_CONFIG_CET 00:11:19.911 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:19.911 #define SPDK_CONFIG_COVERAGE 1 00:11:19.911 #define SPDK_CONFIG_CROSS_PREFIX 00:11:19.911 #undef SPDK_CONFIG_CRYPTO 00:11:19.911 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:19.911 #undef SPDK_CONFIG_CUSTOMOCF 00:11:19.911 #undef SPDK_CONFIG_DAOS 00:11:19.911 #define SPDK_CONFIG_DAOS_DIR 00:11:19.911 #define SPDK_CONFIG_DEBUG 1 00:11:19.911 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:19.911 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:19.911 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:19.911 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:19.911 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:19.911 #undef SPDK_CONFIG_DPDK_UADK 00:11:19.911 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:19.911 #define SPDK_CONFIG_EXAMPLES 1 00:11:19.911 #undef SPDK_CONFIG_FC 00:11:19.911 #define SPDK_CONFIG_FC_PATH 00:11:19.911 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:19.911 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:19.911 #define SPDK_CONFIG_FSDEV 1 00:11:19.911 #undef SPDK_CONFIG_FUSE 00:11:19.911 #undef SPDK_CONFIG_FUZZER 00:11:19.911 #define 
SPDK_CONFIG_FUZZER_LIB 00:11:19.911 #undef SPDK_CONFIG_GOLANG 00:11:19.911 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:19.911 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:19.911 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:19.911 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:19.911 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:19.911 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:19.911 #undef SPDK_CONFIG_HAVE_LZ4 00:11:19.911 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:19.911 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:19.912 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:19.912 #define SPDK_CONFIG_IDXD 1 00:11:19.912 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:19.912 #undef SPDK_CONFIG_IPSEC_MB 00:11:19.912 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:19.912 #define SPDK_CONFIG_ISAL 1 00:11:19.912 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:19.912 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:19.912 #define SPDK_CONFIG_LIBDIR 00:11:19.912 #undef SPDK_CONFIG_LTO 00:11:19.912 #define SPDK_CONFIG_MAX_LCORES 128 00:11:19.912 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:19.912 #define SPDK_CONFIG_NVME_CUSE 1 00:11:19.912 #undef SPDK_CONFIG_OCF 00:11:19.912 #define SPDK_CONFIG_OCF_PATH 00:11:19.912 #define SPDK_CONFIG_OPENSSL_PATH 00:11:19.912 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:19.912 #define SPDK_CONFIG_PGO_DIR 00:11:19.912 #undef SPDK_CONFIG_PGO_USE 00:11:19.912 #define SPDK_CONFIG_PREFIX /usr/local 00:11:19.912 #undef SPDK_CONFIG_RAID5F 00:11:19.912 #undef SPDK_CONFIG_RBD 00:11:19.912 #define SPDK_CONFIG_RDMA 1 00:11:19.912 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:19.912 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:19.912 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:19.912 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:19.912 #define SPDK_CONFIG_SHARED 1 00:11:19.912 #undef SPDK_CONFIG_SMA 00:11:19.912 #define SPDK_CONFIG_TESTS 1 00:11:19.912 #undef SPDK_CONFIG_TSAN 00:11:19.912 #define SPDK_CONFIG_UBLK 1 00:11:19.912 #define SPDK_CONFIG_UBSAN 1 00:11:19.912 #undef 
SPDK_CONFIG_UNIT_TESTS 00:11:19.912 #undef SPDK_CONFIG_URING 00:11:19.912 #define SPDK_CONFIG_URING_PATH 00:11:19.912 #undef SPDK_CONFIG_URING_ZNS 00:11:19.912 #undef SPDK_CONFIG_USDT 00:11:19.912 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:19.912 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:19.912 #define SPDK_CONFIG_VFIO_USER 1 00:11:19.912 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:19.912 #define SPDK_CONFIG_VHOST 1 00:11:19.912 #define SPDK_CONFIG_VIRTIO 1 00:11:19.912 #undef SPDK_CONFIG_VTUNE 00:11:19.912 #define SPDK_CONFIG_VTUNE_DIR 00:11:19.912 #define SPDK_CONFIG_WERROR 1 00:11:19.912 #define SPDK_CONFIG_WPDK_DIR 00:11:19.912 #undef SPDK_CONFIG_XNVME 00:11:19.912 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.912 06:01:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
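The paths/export.sh traces above show the same toolchain directories (/opt/go, /opt/protoc, /opt/golangci) being prepended to PATH each time the script is sourced, so entries accumulate. A minimal, hypothetical dedup helper, not part of the SPDK scripts, that keeps the first occurrence of each entry:

```shell
# Illustrative sketch only: collapse duplicate PATH entries, preserving order.
dedup_path() {
  local out= seen= dir
  local IFS=':'                      # split the input on colons
  for dir in $1; do
    case ":$seen:" in
      *":$dir:"*) ;;                 # already kept, skip the duplicate
      *) seen="$seen:$dir"; out="${out:+$out:}$dir" ;;
    esac
  done
  printf '%s\n' "$out"
}

dedup_path "/opt/go/bin:/usr/bin:/opt/go/bin:/bin"
# prints: /opt/go/bin:/usr/bin:/bin
```

The repeated entries in the log are harmless (lookup stops at the first match) but make the traces hard to read; a helper like this would keep the exported PATH stable across repeated sourcing.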
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:19.912 06:01:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:19.912 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:19.913 
06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:19.913 06:01:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:19.913 
06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@140 -- # : v23.11 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:19.913 06:01:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 
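The long run of `: N` / `export SPDK_TEST_*` pairs traced above is bash's default-value expansion: each flag gets a default only when the caller left it unset or empty, then is exported to child processes. A standalone sketch of the idiom, with variable names borrowed from the log purely for illustration:

```shell
# Sketch of the flag-defaulting idiom seen in autotest_common.sh traces.
unset SPDK_TEST_NVMF SPDK_TEST_NVMF_TRANSPORT

: "${SPDK_TEST_NVMF:=0}"              # unset, so the default 0 is assigned
export SPDK_TEST_NVMF

SPDK_TEST_NVMF_TRANSPORT=tcp          # caller-provided value...
: "${SPDK_TEST_NVMF_TRANSPORT:=rdma}" # ...wins over the default
export SPDK_TEST_NVMF_TRANSPORT

echo "$SPDK_TEST_NVMF $SPDK_TEST_NVMF_TRANSPORT"
# prints: 0 tcp
```

This is why this run shows `: tcp` before `export SPDK_TEST_NVMF_TRANSPORT`: the nvmf-tcp job set the transport, and the default never applied.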
00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:19.913 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 868419 ]] 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 868419 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.tbwoYr 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.tbwoYr/tests/target /tmp/spdk.tbwoYr 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=722997248 00:11:19.914 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=4561432576 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=88402751488 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=95552405504 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7149654016 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47766171648 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776202752 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19087470592 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19110481920 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23011328 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47775858688 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776202752 00:11:19.915 06:01:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=344064 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=9555226624 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=9555238912 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:19.915 * Looking for test storage... 
00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=88402751488 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9364246528 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.915 06:01:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:19.915 06:01:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:19.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.915 --rc genhtml_branch_coverage=1 00:11:19.915 --rc genhtml_function_coverage=1 00:11:19.915 --rc genhtml_legend=1 00:11:19.915 --rc geninfo_all_blocks=1 00:11:19.915 --rc geninfo_unexecuted_blocks=1 00:11:19.915 00:11:19.915 ' 00:11:19.915 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:19.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.915 --rc genhtml_branch_coverage=1 00:11:19.916 --rc genhtml_function_coverage=1 00:11:19.916 --rc genhtml_legend=1 00:11:19.916 --rc geninfo_all_blocks=1 00:11:19.916 --rc geninfo_unexecuted_blocks=1 00:11:19.916 00:11:19.916 ' 00:11:19.916 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:19.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.916 --rc genhtml_branch_coverage=1 00:11:19.916 --rc genhtml_function_coverage=1 00:11:19.916 --rc genhtml_legend=1 00:11:19.916 --rc geninfo_all_blocks=1 00:11:19.916 --rc geninfo_unexecuted_blocks=1 00:11:19.916 00:11:19.916 ' 00:11:19.916 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:19.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.916 --rc genhtml_branch_coverage=1 00:11:19.916 --rc genhtml_function_coverage=1 00:11:19.916 --rc genhtml_legend=1 00:11:19.916 --rc geninfo_all_blocks=1 00:11:19.916 --rc geninfo_unexecuted_blocks=1 00:11:19.916 00:11:19.916 ' 00:11:19.916 06:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.916 06:01:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:19.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:19.916 06:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:26.494 06:01:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:26.494 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:26.494 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.494 06:01:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:26.494 Found net devices under 0000:af:00.0: cvl_0_0 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:26.494 Found net devices under 0000:af:00.1: cvl_0_1 00:11:26.494 06:01:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:26.494 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:26.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:26.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:11:26.495 00:11:26.495 --- 10.0.0.2 ping statistics --- 00:11:26.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.495 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:26.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:26.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:11:26.495 00:11:26.495 --- 10.0.0.1 ping statistics --- 00:11:26.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.495 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:26.495 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:26.495 06:01:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:26.495 ************************************ 00:11:26.495 START TEST nvmf_filesystem_no_in_capsule 00:11:26.495 ************************************ 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=871590 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 871590 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 871590 ']' 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.495 [2024-12-15 06:01:46.132670] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:11:26.495 [2024-12-15 06:01:46.132715] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.495 [2024-12-15 06:01:46.212627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.495 [2024-12-15 06:01:46.235300] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.495 [2024-12-15 06:01:46.235335] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:26.495 [2024-12-15 06:01:46.235342] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.495 [2024-12-15 06:01:46.235348] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.495 [2024-12-15 06:01:46.235353] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:26.495 [2024-12-15 06:01:46.236779] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.495 [2024-12-15 06:01:46.236893] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.495 [2024-12-15 06:01:46.237012] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.495 [2024-12-15 06:01:46.237014] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.495 [2024-12-15 06:01:46.364456] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.495 Malloc1 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.495 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.496 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.496 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.496 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.496 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.496 [2024-12-15 06:01:46.511621] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.496 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.496 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:26.496 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:26.496 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:26.496 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:26.496 06:01:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:26.496 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:26.496 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.496 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.496 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.496 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:26.496 { 00:11:26.496 "name": "Malloc1", 00:11:26.496 "aliases": [ 00:11:26.496 "ad00eebf-c26e-4be5-bed1-6a7324b1727f" 00:11:26.496 ], 00:11:26.496 "product_name": "Malloc disk", 00:11:26.496 "block_size": 512, 00:11:26.496 "num_blocks": 1048576, 00:11:26.496 "uuid": "ad00eebf-c26e-4be5-bed1-6a7324b1727f", 00:11:26.496 "assigned_rate_limits": { 00:11:26.496 "rw_ios_per_sec": 0, 00:11:26.496 "rw_mbytes_per_sec": 0, 00:11:26.496 "r_mbytes_per_sec": 0, 00:11:26.496 "w_mbytes_per_sec": 0 00:11:26.496 }, 00:11:26.496 "claimed": true, 00:11:26.496 "claim_type": "exclusive_write", 00:11:26.496 "zoned": false, 00:11:26.496 "supported_io_types": { 00:11:26.496 "read": true, 00:11:26.496 "write": true, 00:11:26.496 "unmap": true, 00:11:26.496 "flush": true, 00:11:26.496 "reset": true, 00:11:26.496 "nvme_admin": false, 00:11:26.496 "nvme_io": false, 00:11:26.496 "nvme_io_md": false, 00:11:26.496 "write_zeroes": true, 00:11:26.496 "zcopy": true, 00:11:26.496 "get_zone_info": false, 00:11:26.496 "zone_management": false, 00:11:26.496 "zone_append": false, 00:11:26.496 "compare": false, 00:11:26.496 "compare_and_write": 
false, 00:11:26.496 "abort": true, 00:11:26.496 "seek_hole": false, 00:11:26.496 "seek_data": false, 00:11:26.496 "copy": true, 00:11:26.496 "nvme_iov_md": false 00:11:26.496 }, 00:11:26.496 "memory_domains": [ 00:11:26.496 { 00:11:26.496 "dma_device_id": "system", 00:11:26.496 "dma_device_type": 1 00:11:26.496 }, 00:11:26.496 { 00:11:26.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.496 "dma_device_type": 2 00:11:26.496 } 00:11:26.496 ], 00:11:26.496 "driver_specific": {} 00:11:26.496 } 00:11:26.496 ]' 00:11:26.496 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:26.496 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:26.496 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:26.825 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:26.825 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:26.825 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:26.825 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:26.825 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:27.847 06:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:27.847 06:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:27.847 06:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:27.847 06:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:27.848 06:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:29.752 06:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:29.752 06:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:29.752 06:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:29.752 06:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:29.752 06:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:29.752 06:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:29.752 06:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:29.752 06:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:29.752 06:01:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:29.752 06:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:29.752 06:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:29.752 06:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:29.752 06:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:29.752 06:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:29.752 06:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:29.752 06:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:29.752 06:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:30.012 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:30.579 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:31.516 06:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:31.516 06:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:31.516 06:01:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:31.516 06:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.516 06:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.516 ************************************ 00:11:31.516 START TEST filesystem_ext4 00:11:31.516 ************************************ 00:11:31.516 06:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:31.516 06:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:31.516 06:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:31.516 06:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:31.516 06:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:31.516 06:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:31.516 06:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:31.516 06:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:31.516 06:01:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:31.516 06:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:31.516 06:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:31.516 mke2fs 1.47.0 (5-Feb-2023) 00:11:31.775 Discarding device blocks: 0/522240 done 00:11:31.775 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:31.775 Filesystem UUID: b49e65d6-16e0-4bce-b4e5-18056d1530c5 00:11:31.775 Superblock backups stored on blocks: 00:11:31.775 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:31.775 00:11:31.775 Allocating group tables: 0/64 done 00:11:31.775 Writing inode tables: 0/64 done 00:11:32.034 Creating journal (8192 blocks): done 00:11:32.034 Writing superblocks and filesystem accounting information: 0/64 done 00:11:32.034 00:11:32.034 06:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:32.034 06:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:37.305 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:37.305 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:37.305 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:37.305 06:01:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:37.305 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:37.305 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:37.305 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 871590 00:11:37.305 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:37.305 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:37.305 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:37.305 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:37.305 00:11:37.305 real 0m5.782s 00:11:37.305 user 0m0.033s 00:11:37.305 sys 0m0.066s 00:11:37.305 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.305 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:37.305 ************************************ 00:11:37.305 END TEST filesystem_ext4 00:11:37.305 ************************************ 00:11:37.305 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:37.305 
06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:37.305 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:37.305 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.305 ************************************ 00:11:37.305 START TEST filesystem_btrfs 00:11:37.305 ************************************ 00:11:37.305 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:37.305 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:37.305 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:37.306 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:37.306 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:37.306 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:37.306 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:37.306 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:37.306 06:01:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:37.306 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:37.306 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:37.564 btrfs-progs v6.8.1 00:11:37.564 See https://btrfs.readthedocs.io for more information. 00:11:37.564 00:11:37.564 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:37.564 NOTE: several default settings have changed in version 5.15, please make sure 00:11:37.564 this does not affect your deployments: 00:11:37.564 - DUP for metadata (-m dup) 00:11:37.564 - enabled no-holes (-O no-holes) 00:11:37.564 - enabled free-space-tree (-R free-space-tree) 00:11:37.564 00:11:37.564 Label: (null) 00:11:37.564 UUID: c57ea6bd-df68-4529-9e80-92b3ccbc86d9 00:11:37.564 Node size: 16384 00:11:37.564 Sector size: 4096 (CPU page size: 4096) 00:11:37.564 Filesystem size: 510.00MiB 00:11:37.564 Block group profiles: 00:11:37.564 Data: single 8.00MiB 00:11:37.564 Metadata: DUP 32.00MiB 00:11:37.564 System: DUP 8.00MiB 00:11:37.564 SSD detected: yes 00:11:37.564 Zoned device: no 00:11:37.564 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:37.564 Checksum: crc32c 00:11:37.564 Number of devices: 1 00:11:37.564 Devices: 00:11:37.564 ID SIZE PATH 00:11:37.564 1 510.00MiB /dev/nvme0n1p1 00:11:37.564 00:11:37.564 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:37.564 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:37.823 06:01:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:37.823 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:37.823 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:37.823 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:37.823 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:37.823 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:37.823 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 871590 00:11:37.823 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:37.823 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:37.823 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:37.823 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:38.082 00:11:38.082 real 0m0.540s 00:11:38.082 user 0m0.030s 00:11:38.082 sys 0m0.107s 00:11:38.082 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.082 
06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:38.082 ************************************ 00:11:38.082 END TEST filesystem_btrfs 00:11:38.082 ************************************ 00:11:38.082 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:38.082 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:38.082 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.082 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.082 ************************************ 00:11:38.082 START TEST filesystem_xfs 00:11:38.082 ************************************ 00:11:38.082 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:38.082 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:38.082 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:38.082 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:38.082 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:38.082 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:38.082 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:38.082 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:38.082 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:38.082 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:38.082 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:38.082 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:38.082 = sectsz=512 attr=2, projid32bit=1 00:11:38.082 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:38.082 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:38.082 data = bsize=4096 blocks=130560, imaxpct=25 00:11:38.082 = sunit=0 swidth=0 blks 00:11:38.082 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:38.082 log =internal log bsize=4096 blocks=16384, version=2 00:11:38.082 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:38.082 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:39.019 Discarding blocks...Done. 
00:11:39.019 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:39.019 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:41.554 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:41.554 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:41.554 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:41.554 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:41.554 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:41.554 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:41.554 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 871590 00:11:41.554 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:41.554 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:41.554 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:41.554 06:02:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:41.554 00:11:41.554 real 0m3.310s 00:11:41.554 user 0m0.024s 00:11:41.554 sys 0m0.075s 00:11:41.554 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.554 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:41.554 ************************************ 00:11:41.554 END TEST filesystem_xfs 00:11:41.554 ************************************ 00:11:41.554 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:41.814 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:41.814 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:41.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.814 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:41.814 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:41.814 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:41.814 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:41.814 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:41.814 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:41.814 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:41.814 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:41.814 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.814 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:41.814 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.814 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:41.814 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 871590 00:11:41.814 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 871590 ']' 00:11:41.814 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 871590 00:11:41.814 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:41.814 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:41.814 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 871590 00:11:41.814 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:41.814 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:41.814 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 871590' 00:11:41.814 killing process with pid 871590 00:11:41.814 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 871590 00:11:41.814 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 871590 00:11:42.382 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:42.382 00:11:42.382 real 0m16.158s 00:11:42.382 user 1m3.662s 00:11:42.382 sys 0m1.326s 00:11:42.382 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.383 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.383 ************************************ 00:11:42.383 END TEST nvmf_filesystem_no_in_capsule 00:11:42.383 ************************************ 00:11:42.383 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:42.383 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:42.383 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.383 06:02:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:42.383 ************************************ 00:11:42.383 START TEST nvmf_filesystem_in_capsule 00:11:42.383 ************************************ 00:11:42.383 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:42.383 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:42.383 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:42.383 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:42.383 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:42.383 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.383 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=874329 00:11:42.383 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 874329 00:11:42.383 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:42.383 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 874329 ']' 00:11:42.383 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.383 06:02:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.383 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.383 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.383 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.383 [2024-12-15 06:02:02.363459] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:11:42.383 [2024-12-15 06:02:02.363502] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.383 [2024-12-15 06:02:02.445909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:42.383 [2024-12-15 06:02:02.466645] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.383 [2024-12-15 06:02:02.466687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:42.383 [2024-12-15 06:02:02.466694] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:42.383 [2024-12-15 06:02:02.466700] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:42.383 [2024-12-15 06:02:02.466705] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:42.383 [2024-12-15 06:02:02.468166] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.383 [2024-12-15 06:02:02.468274] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.383 [2024-12-15 06:02:02.468379] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.383 [2024-12-15 06:02:02.468380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.642 [2024-12-15 06:02:02.608672] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.642 Malloc1 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.642 06:02:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.642 [2024-12-15 06:02:02.763148] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:42.642 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.643 06:02:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.901 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.901 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:42.901 { 00:11:42.901 "name": "Malloc1", 00:11:42.901 "aliases": [ 00:11:42.901 "89cbcb1d-d98b-4878-91dd-36391046d1cc" 00:11:42.901 ], 00:11:42.901 "product_name": "Malloc disk", 00:11:42.901 "block_size": 512, 00:11:42.901 "num_blocks": 1048576, 00:11:42.901 "uuid": "89cbcb1d-d98b-4878-91dd-36391046d1cc", 00:11:42.901 "assigned_rate_limits": { 00:11:42.901 "rw_ios_per_sec": 0, 00:11:42.901 "rw_mbytes_per_sec": 0, 00:11:42.901 "r_mbytes_per_sec": 0, 00:11:42.901 "w_mbytes_per_sec": 0 00:11:42.901 }, 00:11:42.901 "claimed": true, 00:11:42.901 "claim_type": "exclusive_write", 00:11:42.901 "zoned": false, 00:11:42.901 "supported_io_types": { 00:11:42.901 "read": true, 00:11:42.901 "write": true, 00:11:42.901 "unmap": true, 00:11:42.901 "flush": true, 00:11:42.901 "reset": true, 00:11:42.901 "nvme_admin": false, 00:11:42.901 "nvme_io": false, 00:11:42.901 "nvme_io_md": false, 00:11:42.901 "write_zeroes": true, 00:11:42.901 "zcopy": true, 00:11:42.901 "get_zone_info": false, 00:11:42.901 "zone_management": false, 00:11:42.901 "zone_append": false, 00:11:42.901 "compare": false, 00:11:42.901 "compare_and_write": false, 00:11:42.901 "abort": true, 00:11:42.901 "seek_hole": false, 00:11:42.901 "seek_data": false, 00:11:42.901 "copy": true, 00:11:42.901 "nvme_iov_md": false 00:11:42.901 }, 00:11:42.901 "memory_domains": [ 00:11:42.901 { 00:11:42.901 "dma_device_id": "system", 00:11:42.901 "dma_device_type": 1 00:11:42.901 }, 00:11:42.901 { 00:11:42.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.901 "dma_device_type": 2 00:11:42.901 } 00:11:42.901 ], 00:11:42.901 
"driver_specific": {} 00:11:42.901 } 00:11:42.901 ]' 00:11:42.901 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:42.901 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:42.901 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:42.901 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:42.901 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:42.901 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:42.901 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:42.901 06:02:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:43.839 06:02:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:43.839 06:02:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:43.839 06:02:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:43.839 06:02:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:11:43.839 06:02:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:46.372 06:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:46.372 06:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:46.372 06:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:46.372 06:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:46.372 06:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:46.372 06:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:46.372 06:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:46.372 06:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:46.372 06:02:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:46.372 06:02:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:46.372 06:02:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:46.372 06:02:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:46.372 06:02:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:46.372 06:02:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:46.372 06:02:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:46.372 06:02:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:46.372 06:02:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:46.372 06:02:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:46.632 06:02:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:47.569 06:02:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:47.569 06:02:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:47.569 06:02:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:47.569 06:02:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.569 06:02:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.569 ************************************ 00:11:47.569 START TEST filesystem_in_capsule_ext4 00:11:47.569 ************************************ 00:11:47.569 06:02:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:47.569 06:02:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:47.569 06:02:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:47.569 06:02:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:47.569 06:02:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:47.569 06:02:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:47.569 06:02:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:47.569 06:02:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:47.569 06:02:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:47.569 06:02:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:47.569 06:02:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:47.569 mke2fs 1.47.0 (5-Feb-2023) 00:11:47.829 Discarding device blocks: 
0/522240 done 00:11:47.829 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:47.829 Filesystem UUID: f0d298ee-34cb-43ac-bb1e-0462e149e09c 00:11:47.829 Superblock backups stored on blocks: 00:11:47.829 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:47.829 00:11:47.829 Allocating group tables: 0/64 done 00:11:47.829 Writing inode tables: 0/64 done 00:11:48.397 Creating journal (8192 blocks): done 00:11:50.551 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:11:50.551 00:11:50.551 06:02:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:50.551 06:02:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:55.824 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:55.824 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:55.824 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:55.824 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:55.824 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:55.824 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:55.824 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 874329 00:11:55.824 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:55.824 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:55.824 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:55.824 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:56.083 00:11:56.083 real 0m8.293s 00:11:56.083 user 0m0.029s 00:11:56.083 sys 0m0.072s 00:11:56.083 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.083 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:56.083 ************************************ 00:11:56.083 END TEST filesystem_in_capsule_ext4 00:11:56.083 ************************************ 00:11:56.083 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:56.083 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:56.083 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.083 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.083 ************************************ 00:11:56.083 START 
TEST filesystem_in_capsule_btrfs 00:11:56.083 ************************************ 00:11:56.083 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:56.083 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:56.083 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:56.083 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:56.083 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:56.083 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:56.083 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:56.083 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:56.083 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:56.083 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:56.083 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:56.342 btrfs-progs v6.8.1 00:11:56.342 See https://btrfs.readthedocs.io for more information. 00:11:56.342 00:11:56.342 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:56.342 NOTE: several default settings have changed in version 5.15, please make sure 00:11:56.342 this does not affect your deployments: 00:11:56.342 - DUP for metadata (-m dup) 00:11:56.342 - enabled no-holes (-O no-holes) 00:11:56.342 - enabled free-space-tree (-R free-space-tree) 00:11:56.342 00:11:56.342 Label: (null) 00:11:56.342 UUID: afea03c0-49e1-493b-a6a7-db76b77d579b 00:11:56.342 Node size: 16384 00:11:56.342 Sector size: 4096 (CPU page size: 4096) 00:11:56.342 Filesystem size: 510.00MiB 00:11:56.342 Block group profiles: 00:11:56.342 Data: single 8.00MiB 00:11:56.342 Metadata: DUP 32.00MiB 00:11:56.342 System: DUP 8.00MiB 00:11:56.342 SSD detected: yes 00:11:56.342 Zoned device: no 00:11:56.342 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:56.342 Checksum: crc32c 00:11:56.342 Number of devices: 1 00:11:56.342 Devices: 00:11:56.342 ID SIZE PATH 00:11:56.342 1 510.00MiB /dev/nvme0n1p1 00:11:56.342 00:11:56.342 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:56.342 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:56.342 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:56.342 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:56.342 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:56.601 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:56.601 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:56.601 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:56.601 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 874329 00:11:56.601 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:56.601 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:56.601 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:56.602 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:56.602 00:11:56.602 real 0m0.492s 00:11:56.602 user 0m0.029s 00:11:56.602 sys 0m0.113s 00:11:56.602 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.602 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:56.602 ************************************ 00:11:56.602 END TEST filesystem_in_capsule_btrfs 00:11:56.602 ************************************ 00:11:56.602 06:02:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:56.602 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:56.602 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.602 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.602 ************************************ 00:11:56.602 START TEST filesystem_in_capsule_xfs 00:11:56.602 ************************************ 00:11:56.602 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:56.602 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:56.602 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:56.602 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:56.602 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:56.602 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:56.602 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:56.602 
06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:56.602 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:56.602 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:56.602 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:56.602 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:56.602 = sectsz=512 attr=2, projid32bit=1 00:11:56.602 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:56.602 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:56.602 data = bsize=4096 blocks=130560, imaxpct=25 00:11:56.602 = sunit=0 swidth=0 blks 00:11:56.602 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:56.602 log =internal log bsize=4096 blocks=16384, version=2 00:11:56.602 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:56.602 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:57.979 Discarding blocks...Done. 
00:11:57.979 06:02:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:57.979 06:02:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:59.884 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:59.884 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:59.884 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:59.884 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:59.884 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:59.884 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:59.884 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 874329 00:11:59.885 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:59.885 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:59.885 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:59.885 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:59.885 00:11:59.885 real 0m3.087s 00:11:59.885 user 0m0.026s 00:11:59.885 sys 0m0.072s 00:11:59.885 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:59.885 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:59.885 ************************************ 00:11:59.885 END TEST filesystem_in_capsule_xfs 00:11:59.885 ************************************ 00:11:59.885 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:00.144 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:00.144 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:00.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.144 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:00.144 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:00.144 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:00.144 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.144 06:02:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.144 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:00.144 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:00.144 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:00.144 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.144 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.144 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.144 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:00.144 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 874329 00:12:00.144 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 874329 ']' 00:12:00.144 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 874329 00:12:00.144 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:00.144 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.144 06:02:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 874329 00:12:00.144 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.144 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.144 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 874329' 00:12:00.144 killing process with pid 874329 00:12:00.144 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 874329 00:12:00.144 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 874329 00:12:00.403 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:00.403 00:12:00.403 real 0m18.208s 00:12:00.403 user 1m11.755s 00:12:00.403 sys 0m1.408s 00:12:00.403 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.403 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.403 ************************************ 00:12:00.403 END TEST nvmf_filesystem_in_capsule 00:12:00.403 ************************************ 00:12:00.662 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:00.662 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:00.662 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:00.662 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:00.662 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:00.662 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:00.662 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:00.662 rmmod nvme_tcp 00:12:00.662 rmmod nvme_fabrics 00:12:00.662 rmmod nvme_keyring 00:12:00.662 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:00.662 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:00.662 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:00.662 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:00.662 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:00.662 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:00.662 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:00.662 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:00.662 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:00.662 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:00.662 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:00.662 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:00.662 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:00.662 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.662 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.662 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.567 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:02.826 00:12:02.826 real 0m43.124s 00:12:02.826 user 2m17.430s 00:12:02.826 sys 0m7.476s 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:02.826 ************************************ 00:12:02.826 END TEST nvmf_filesystem 00:12:02.826 ************************************ 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:02.826 ************************************ 00:12:02.826 START TEST nvmf_target_discovery 00:12:02.826 ************************************ 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:02.826 * Looking for test storage... 
00:12:02.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:02.826 
06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:02.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.826 --rc genhtml_branch_coverage=1 00:12:02.826 --rc genhtml_function_coverage=1 00:12:02.826 --rc genhtml_legend=1 00:12:02.826 --rc geninfo_all_blocks=1 00:12:02.826 --rc geninfo_unexecuted_blocks=1 00:12:02.826 00:12:02.826 ' 00:12:02.826 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:02.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.826 --rc genhtml_branch_coverage=1 00:12:02.827 --rc genhtml_function_coverage=1 00:12:02.827 --rc genhtml_legend=1 00:12:02.827 --rc geninfo_all_blocks=1 00:12:02.827 --rc geninfo_unexecuted_blocks=1 00:12:02.827 00:12:02.827 ' 00:12:02.827 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:02.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.827 --rc genhtml_branch_coverage=1 00:12:02.827 --rc genhtml_function_coverage=1 00:12:02.827 --rc genhtml_legend=1 00:12:02.827 --rc geninfo_all_blocks=1 00:12:02.827 --rc geninfo_unexecuted_blocks=1 00:12:02.827 00:12:02.827 ' 00:12:02.827 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:02.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.827 --rc genhtml_branch_coverage=1 00:12:02.827 --rc genhtml_function_coverage=1 00:12:02.827 --rc genhtml_legend=1 00:12:02.827 --rc geninfo_all_blocks=1 00:12:02.827 --rc geninfo_unexecuted_blocks=1 00:12:02.827 00:12:02.827 ' 00:12:02.827 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:02.827 06:02:22 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:03.086 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.086 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.086 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.086 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.086 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.086 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.086 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.086 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.086 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.086 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.086 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:03.086 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:03.086 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.086 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.086 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:03.086 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.086 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:03.086 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:03.086 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.086 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.086 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.086 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.086 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.086 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.086 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:03.087 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.087 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:03.087 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:03.087 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:03.087 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.087 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.087 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.087 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:03.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:03.087 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:03.087 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:03.087 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:03.087 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:12:03.087 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:03.087 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:03.087 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:03.087 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:03.087 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:03.087 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:03.087 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:03.087 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:03.087 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:03.087 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.087 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.087 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.087 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:03.087 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:03.087 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:03.087 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.657 06:02:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:09.657 06:02:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
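The device-discovery phase above fills the `e810`, `x722`, and `mlx` arrays by matching PCI vendor:device pairs (Intel `0x8086` with `0x1592`/`0x159b` for E810 and `0x37d2` for X722, plus a list of Mellanox `0x15b3` IDs), then walks `/sys/bus/pci/devices/$pci/net/` to find each adapter's kernel net device. A minimal, self-contained sketch of that classification — the function name and the reduced ID list are illustrative, not SPDK's actual helper:

```shell
# Sketch: classify a "vendor:device" PCI ID the way the log's
# gather_supported_nvmf_pci_devs step does (reduced ID list, illustrative name).
classify_nvmf_nic() {
  case "$1" in
    0x8086:0x1592|0x8086:0x159b) echo e810 ;;   # Intel E810 (ice driver)
    0x8086:0x37d2)               echo x722 ;;   # Intel X722
    0x15b3:0x1017|0x15b3:0x1019|0x15b3:0x101b|0x15b3:0x101d)
                                 echo mlx ;;    # Mellanox ConnectX family
    *)                           echo unknown ;;
  esac
}

# Both adapters found in this run report 0x8086 - 0x159b:
classify_nvmf_nic 0x8086:0x159b   # prints "e810"
```

That is why the log prints "Found 0000:af:00.0 (0x8086 - 0x159b)" twice and then takes the `[[ e810 == e810 ]]` branch before locating `cvl_0_0` and `cvl_0_1`.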
00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:09.657 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:09.657 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:09.658 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:09.658 06:02:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:09.658 Found net devices under 0000:af:00.0: cvl_0_0 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:09.658 06:02:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:09.658 Found net devices under 0000:af:00.1: cvl_0_1 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:09.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:09.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:12:09.658 00:12:09.658 --- 10.0.0.2 ping statistics --- 00:12:09.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.658 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:09.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:09.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:12:09.658 00:12:09.658 --- 10.0.0.1 ping statistics --- 00:12:09.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.658 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=880943 00:12:09.658 06:02:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 880943 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 880943 ']' 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.658 06:02:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.658 [2024-12-15 06:02:28.974413] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:12:09.658 [2024-12-15 06:02:28.974463] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.658 [2024-12-15 06:02:29.051112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:09.658 [2024-12-15 06:02:29.074063] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:09.658 [2024-12-15 06:02:29.074103] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.659 [2024-12-15 06:02:29.074110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.659 [2024-12-15 06:02:29.074117] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.659 [2024-12-15 06:02:29.074122] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:09.659 [2024-12-15 06:02:29.075575] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.659 [2024-12-15 06:02:29.075663] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.659 [2024-12-15 06:02:29.075773] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.659 [2024-12-15 06:02:29.075775] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.659 [2024-12-15 06:02:29.208541] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.659 Null1 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.659 
06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.659 [2024-12-15 06:02:29.273131] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.659 Null2 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.659 
06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.659 Null3 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.659 Null4 00:12:09.659 
06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.659 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:09.659 00:12:09.659 Discovery Log Number of Records 6, Generation counter 6 00:12:09.659 =====Discovery Log Entry 0====== 00:12:09.659 trtype: tcp 00:12:09.659 adrfam: ipv4 00:12:09.659 subtype: current discovery subsystem 00:12:09.659 treq: not required 00:12:09.659 portid: 0 00:12:09.659 trsvcid: 4420 00:12:09.659 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:09.659 traddr: 10.0.0.2 00:12:09.659 eflags: explicit discovery connections, duplicate discovery information 00:12:09.659 sectype: none 00:12:09.659 =====Discovery Log Entry 1====== 00:12:09.659 trtype: tcp 00:12:09.660 adrfam: ipv4 00:12:09.660 subtype: nvme subsystem 00:12:09.660 treq: not required 00:12:09.660 portid: 0 00:12:09.660 trsvcid: 4420 00:12:09.660 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:09.660 traddr: 10.0.0.2 00:12:09.660 eflags: none 00:12:09.660 sectype: none 00:12:09.660 =====Discovery Log Entry 2====== 00:12:09.660 
trtype: tcp 00:12:09.660 adrfam: ipv4 00:12:09.660 subtype: nvme subsystem 00:12:09.660 treq: not required 00:12:09.660 portid: 0 00:12:09.660 trsvcid: 4420 00:12:09.660 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:09.660 traddr: 10.0.0.2 00:12:09.660 eflags: none 00:12:09.660 sectype: none 00:12:09.660 =====Discovery Log Entry 3====== 00:12:09.660 trtype: tcp 00:12:09.660 adrfam: ipv4 00:12:09.660 subtype: nvme subsystem 00:12:09.660 treq: not required 00:12:09.660 portid: 0 00:12:09.660 trsvcid: 4420 00:12:09.660 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:09.660 traddr: 10.0.0.2 00:12:09.660 eflags: none 00:12:09.660 sectype: none 00:12:09.660 =====Discovery Log Entry 4====== 00:12:09.660 trtype: tcp 00:12:09.660 adrfam: ipv4 00:12:09.660 subtype: nvme subsystem 00:12:09.660 treq: not required 00:12:09.660 portid: 0 00:12:09.660 trsvcid: 4420 00:12:09.660 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:09.660 traddr: 10.0.0.2 00:12:09.660 eflags: none 00:12:09.660 sectype: none 00:12:09.660 =====Discovery Log Entry 5====== 00:12:09.660 trtype: tcp 00:12:09.660 adrfam: ipv4 00:12:09.660 subtype: discovery subsystem referral 00:12:09.660 treq: not required 00:12:09.660 portid: 0 00:12:09.660 trsvcid: 4430 00:12:09.660 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:09.660 traddr: 10.0.0.2 00:12:09.660 eflags: none 00:12:09.660 sectype: none 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:09.660 Perform nvmf subsystem discovery via RPC 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.660 [ 00:12:09.660 { 00:12:09.660 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:12:09.660 "subtype": "Discovery", 00:12:09.660 "listen_addresses": [ 00:12:09.660 { 00:12:09.660 "trtype": "TCP", 00:12:09.660 "adrfam": "IPv4", 00:12:09.660 "traddr": "10.0.0.2", 00:12:09.660 "trsvcid": "4420" 00:12:09.660 } 00:12:09.660 ], 00:12:09.660 "allow_any_host": true, 00:12:09.660 "hosts": [] 00:12:09.660 }, 00:12:09.660 { 00:12:09.660 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.660 "subtype": "NVMe", 00:12:09.660 "listen_addresses": [ 00:12:09.660 { 00:12:09.660 "trtype": "TCP", 00:12:09.660 "adrfam": "IPv4", 00:12:09.660 "traddr": "10.0.0.2", 00:12:09.660 "trsvcid": "4420" 00:12:09.660 } 00:12:09.660 ], 00:12:09.660 "allow_any_host": true, 00:12:09.660 "hosts": [], 00:12:09.660 "serial_number": "SPDK00000000000001", 00:12:09.660 "model_number": "SPDK bdev Controller", 00:12:09.660 "max_namespaces": 32, 00:12:09.660 "min_cntlid": 1, 00:12:09.660 "max_cntlid": 65519, 00:12:09.660 "namespaces": [ 00:12:09.660 { 00:12:09.660 "nsid": 1, 00:12:09.660 "bdev_name": "Null1", 00:12:09.660 "name": "Null1", 00:12:09.660 "nguid": "08E3F66F2EA34925B3227994652A0A62", 00:12:09.660 "uuid": "08e3f66f-2ea3-4925-b322-7994652a0a62" 00:12:09.660 } 00:12:09.660 ] 00:12:09.660 }, 00:12:09.660 { 00:12:09.660 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:09.660 "subtype": "NVMe", 00:12:09.660 "listen_addresses": [ 00:12:09.660 { 00:12:09.660 "trtype": "TCP", 00:12:09.660 "adrfam": "IPv4", 00:12:09.660 "traddr": "10.0.0.2", 00:12:09.660 "trsvcid": "4420" 00:12:09.660 } 00:12:09.660 ], 00:12:09.660 "allow_any_host": true, 00:12:09.660 "hosts": [], 00:12:09.660 "serial_number": "SPDK00000000000002", 00:12:09.660 "model_number": "SPDK bdev Controller", 00:12:09.660 "max_namespaces": 32, 00:12:09.660 "min_cntlid": 1, 00:12:09.660 "max_cntlid": 65519, 00:12:09.660 "namespaces": [ 00:12:09.660 { 00:12:09.660 "nsid": 1, 00:12:09.660 "bdev_name": "Null2", 00:12:09.660 "name": "Null2", 00:12:09.660 "nguid": "43E9152344984281827F0D3FCEB1113E", 
00:12:09.660 "uuid": "43e91523-4498-4281-827f-0d3fceb1113e" 00:12:09.660 } 00:12:09.660 ] 00:12:09.660 }, 00:12:09.660 { 00:12:09.660 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:09.660 "subtype": "NVMe", 00:12:09.660 "listen_addresses": [ 00:12:09.660 { 00:12:09.660 "trtype": "TCP", 00:12:09.660 "adrfam": "IPv4", 00:12:09.660 "traddr": "10.0.0.2", 00:12:09.660 "trsvcid": "4420" 00:12:09.660 } 00:12:09.660 ], 00:12:09.660 "allow_any_host": true, 00:12:09.660 "hosts": [], 00:12:09.660 "serial_number": "SPDK00000000000003", 00:12:09.660 "model_number": "SPDK bdev Controller", 00:12:09.660 "max_namespaces": 32, 00:12:09.660 "min_cntlid": 1, 00:12:09.660 "max_cntlid": 65519, 00:12:09.660 "namespaces": [ 00:12:09.660 { 00:12:09.660 "nsid": 1, 00:12:09.660 "bdev_name": "Null3", 00:12:09.660 "name": "Null3", 00:12:09.660 "nguid": "5B59D03885764CF985BA88D925C13B96", 00:12:09.660 "uuid": "5b59d038-8576-4cf9-85ba-88d925c13b96" 00:12:09.660 } 00:12:09.660 ] 00:12:09.660 }, 00:12:09.660 { 00:12:09.660 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:09.660 "subtype": "NVMe", 00:12:09.660 "listen_addresses": [ 00:12:09.660 { 00:12:09.660 "trtype": "TCP", 00:12:09.660 "adrfam": "IPv4", 00:12:09.660 "traddr": "10.0.0.2", 00:12:09.660 "trsvcid": "4420" 00:12:09.660 } 00:12:09.660 ], 00:12:09.660 "allow_any_host": true, 00:12:09.660 "hosts": [], 00:12:09.660 "serial_number": "SPDK00000000000004", 00:12:09.660 "model_number": "SPDK bdev Controller", 00:12:09.660 "max_namespaces": 32, 00:12:09.660 "min_cntlid": 1, 00:12:09.660 "max_cntlid": 65519, 00:12:09.660 "namespaces": [ 00:12:09.660 { 00:12:09.660 "nsid": 1, 00:12:09.660 "bdev_name": "Null4", 00:12:09.660 "name": "Null4", 00:12:09.660 "nguid": "D9C07277787E41E9865054D7EDE0FC67", 00:12:09.660 "uuid": "d9c07277-787e-41e9-8650-54d7ede0fc67" 00:12:09.660 } 00:12:09.660 ] 00:12:09.660 } 00:12:09.660 ] 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.660 
06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:09.660 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:09.661 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.661 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:09.661 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.661 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:09.661 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.661 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.661 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.661 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:09.661 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.661 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.661 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.661 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:09.661 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.661 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.661 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:09.661 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.661 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:09.661 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:12:09.661 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:09.661 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:09.661 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:09.661 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:09.661 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:09.661 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:09.661 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:09.661 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:09.661 rmmod nvme_tcp 00:12:09.661 rmmod nvme_fabrics 00:12:09.661 rmmod nvme_keyring 00:12:09.922 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:09.922 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:09.922 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:09.922 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 880943 ']' 00:12:09.922 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 880943 00:12:09.922 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 880943 ']' 00:12:09.922 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 880943 00:12:09.922 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:09.922 
06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:09.922 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 880943 00:12:09.922 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:09.922 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:09.922 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 880943' 00:12:09.922 killing process with pid 880943 00:12:09.922 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 880943 00:12:09.922 06:02:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 880943 00:12:09.922 06:02:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:09.922 06:02:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:09.922 06:02:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:09.922 06:02:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:09.922 06:02:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:09.922 06:02:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:09.922 06:02:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:09.922 06:02:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:09.922 06:02:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:12:09.922 06:02:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.922 06:02:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:09.922 06:02:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.537 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:12.537 00:12:12.537 real 0m9.315s 00:12:12.537 user 0m5.656s 00:12:12.537 sys 0m4.801s 00:12:12.537 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.537 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.537 ************************************ 00:12:12.537 END TEST nvmf_target_discovery 00:12:12.537 ************************************ 00:12:12.537 06:02:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:12.537 06:02:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:12.537 06:02:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.537 06:02:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:12.537 ************************************ 00:12:12.537 START TEST nvmf_referrals 00:12:12.537 ************************************ 00:12:12.537 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:12.537 * Looking for test storage... 
00:12:12.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.537 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:12.537 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:12:12.537 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:12.537 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:12.537 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:12.537 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:12.537 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:12.537 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:12.537 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:12.537 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:12.537 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:12.538 06:02:32 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:12.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.538 
--rc genhtml_branch_coverage=1 00:12:12.538 --rc genhtml_function_coverage=1 00:12:12.538 --rc genhtml_legend=1 00:12:12.538 --rc geninfo_all_blocks=1 00:12:12.538 --rc geninfo_unexecuted_blocks=1 00:12:12.538 00:12:12.538 ' 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:12.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.538 --rc genhtml_branch_coverage=1 00:12:12.538 --rc genhtml_function_coverage=1 00:12:12.538 --rc genhtml_legend=1 00:12:12.538 --rc geninfo_all_blocks=1 00:12:12.538 --rc geninfo_unexecuted_blocks=1 00:12:12.538 00:12:12.538 ' 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:12.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.538 --rc genhtml_branch_coverage=1 00:12:12.538 --rc genhtml_function_coverage=1 00:12:12.538 --rc genhtml_legend=1 00:12:12.538 --rc geninfo_all_blocks=1 00:12:12.538 --rc geninfo_unexecuted_blocks=1 00:12:12.538 00:12:12.538 ' 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:12.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.538 --rc genhtml_branch_coverage=1 00:12:12.538 --rc genhtml_function_coverage=1 00:12:12.538 --rc genhtml_legend=1 00:12:12.538 --rc geninfo_all_blocks=1 00:12:12.538 --rc geninfo_unexecuted_blocks=1 00:12:12.538 00:12:12.538 ' 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.538 
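The cmp_versions/lt trace above splits each version string on the `.`, `-` and `:` separators (via `IFS=.-:` and `read -ra`) and compares the resulting fields numerically, treating missing fields as zero. A minimal standalone sketch of the same idea; the helper name below is illustrative, not the exact scripts/common.sh function:

```shell
# Return 0 (true) when version $1 is strictly less than version $2,
# comparing dotted fields numerically the way the trace above does.
version_lt() {
    local IFS=.-:               # split on the same separators as the trace
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
        # A missing field compares as 0 (so 1.15 vs 2 becomes 1.15 vs 2.0).
        local x=${a[i]:-0} y=${b[i]:-0}
        if (( x < y )); then return 0; fi
        if (( x > y )); then return 1; fi
    done
    return 1                    # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.1 2.0 || echo "2.1 >= 2.0"
```

Field-wise numeric comparison avoids the usual pitfall of lexicographic string comparison, where `1.9` would incorrectly sort after `1.15`.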
06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.538 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:12.539 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.539 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:12.539 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:12.539 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:12.539 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.539 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.539 06:02:32 
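paths/export.sh is sourced once per nested shell, so the trace above shows the same /opt/go, /opt/protoc and /opt/golangci directories prepended to PATH many times over. That is harmless but noisy; a small sketch of collapsing such a PATH to the first occurrence of each entry, preserving order (an illustrative helper, not part of SPDK):

```shell
# Keep only the first occurrence of each PATH entry, preserving order.
dedupe_path() {
    local IFS=: entry out=
    declare -A seen             # associative array: entries already emitted
    for entry in $1; do
        if [[ -z ${seen[$entry]:-} ]]; then
            seen[$entry]=1
            out+=${out:+:}$entry
        fi
    done
    printf '%s\n' "$out"
}

dedupe_path "/opt/go/bin:/usr/bin:/opt/go/bin:/bin:/usr/bin"
# prints /opt/go/bin:/usr/bin:/bin
```

Because lookup stops at the first match, keeping only the first occurrence of each directory leaves command resolution unchanged.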
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.539 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:12.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:12.539 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:12.539 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:12.539 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:12.539 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:12.539 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:12.539 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:12.539 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:12.539 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:12.539 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:12.539 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:12.539 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:12.539 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.539 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:12.539 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:12.539 06:02:32 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:12.539 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.539 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.539 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.539 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:12.539 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:12.539 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:12.539 06:02:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:19.111 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:19.111 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:19.111 Found net devices under 0000:af:00.0: cvl_0_0 00:12:19.111 06:02:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:19.111 Found net devices under 0000:af:00.1: cvl_0_1 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:19.111 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:19.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:19.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:12:19.112 00:12:19.112 --- 10.0.0.2 ping statistics --- 00:12:19.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.112 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:19.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:19.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:12:19.112 00:12:19.112 --- 10.0.0.1 ping statistics --- 00:12:19.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.112 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=884649 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 884649 00:12:19.112 
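The nvmf_tcp_init steps above move one physical port (cvl_0_0) into a fresh network namespace for the target, leave its sibling (cvl_0_1) in the root namespace for the initiator, assign 10.0.0.2 and 10.0.0.1, and ping in both directions to verify the link. A condensed dry-run sketch of that plumbing; the interface names are the ones from this run, and the helper below only prints the command sequence rather than executing it (running it for real requires root):

```shell
# Emit the namespace-setup command sequence without executing it,
# mirroring the nvmf_tcp_init steps in the trace above.
plan_nvmf_tcp_init() {
    local target_if=$1 initiator_if=$2 ns=${1}_ns_spdk
    printf '%s\n' \
        "ip netns add $ns" \
        "ip link set $target_if netns $ns" \
        "ip addr add 10.0.0.1/24 dev $initiator_if" \
        "ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if" \
        "ip link set $initiator_if up" \
        "ip netns exec $ns ip link set $target_if up" \
        "ip netns exec $ns ip link set lo up" \
        "ping -c 1 10.0.0.2" \
        "ip netns exec $ns ping -c 1 10.0.0.1"
}

plan_nvmf_tcp_init cvl_0_0 cvl_0_1
```

Isolating the target NIC in its own namespace lets the initiator and target share one machine while still exercising a real TCP path over physical hardware.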
06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 884649 ']' 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.112 [2024-12-15 06:02:38.399974] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:12:19.112 [2024-12-15 06:02:38.400024] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.112 [2024-12-15 06:02:38.478153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:19.112 [2024-12-15 06:02:38.500457] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:19.112 [2024-12-15 06:02:38.500495] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:19.112 [2024-12-15 06:02:38.500502] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:19.112 [2024-12-15 06:02:38.500508] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:19.112 [2024-12-15 06:02:38.500513] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:19.112 [2024-12-15 06:02:38.501946] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.112 [2024-12-15 06:02:38.502066] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:19.112 [2024-12-15 06:02:38.502107] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.112 [2024-12-15 06:02:38.502108] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.112 [2024-12-15 06:02:38.641765] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.112 [2024-12-15 06:02:38.677178] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:19.112 06:02:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:19.112 06:02:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:19.112 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:19.112 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:19.112 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:19.112 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.112 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.112 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.112 06:02:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:19.112 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.112 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.112 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.112 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:19.112 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.112 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.112 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.112 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:19.112 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:19.112 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.112 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.112 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.112 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:19.112 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:19.112 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:19.112 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:19.112 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:19.112 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:19.112 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:19.371 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:19.371 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:19.371 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:19.371 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.371 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.371 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.371 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:19.371 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.371 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.371 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.371 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:19.371 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:19.371 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:19.371 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:19.371 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.371 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:19.371 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.371 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.371 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:19.371 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:19.372 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:19.372 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:19.372 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:19.372 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:19.372 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:19.372 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:19.372 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:19.372 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:19.372 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:19.372 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:19.372 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:19.372 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:19.372 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:19.629 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:19.629 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:19.629 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:19.629 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:19.629 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:19.629 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:19.887 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:19.887 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:19.887 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.887 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.887 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.887 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:19.887 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:19.887 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:19.887 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:19.887 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.887 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:19.887 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.887 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.887 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:19.887 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:19.887 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:19.888 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:19.888 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:19.888 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:19.888 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:19.888 06:02:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:20.145 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:20.145 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:20.145 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:20.145 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:20.146 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:20.146 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:20.146 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:20.146 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:20.146 06:02:40 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:20.146 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:20.146 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:20.146 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:20.146 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:20.404 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:20.404 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:20.404 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.404 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.404 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.404 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:20.405 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:20.405 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.405 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:20.405 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.405 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:20.405 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:20.405 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:20.405 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:20.405 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:20.405 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:20.405 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:20.664 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:20.664 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:20.664 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:20.664 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:20.664 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:20.664 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:20.664 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:20.664 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:12:20.664 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:20.664 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:20.664 rmmod nvme_tcp 00:12:20.664 rmmod nvme_fabrics 00:12:20.664 rmmod nvme_keyring 00:12:20.664 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:20.664 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:20.664 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:20.664 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 884649 ']' 00:12:20.664 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 884649 00:12:20.664 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 884649 ']' 00:12:20.664 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 884649 00:12:20.664 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:20.664 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:20.664 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 884649 00:12:20.923 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:20.923 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:20.923 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 884649' 00:12:20.923 killing process with pid 884649 00:12:20.923 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- 
# kill 884649 00:12:20.923 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 884649 00:12:20.923 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:20.923 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:20.923 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:20.923 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:20.923 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:20.923 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:20.923 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:20.923 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:20.923 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:20.923 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.923 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.923 06:02:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:23.459 00:12:23.459 real 0m10.880s 00:12:23.459 user 0m12.471s 00:12:23.459 sys 0m5.131s 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.459 ************************************ 
00:12:23.459 END TEST nvmf_referrals 00:12:23.459 ************************************ 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:23.459 ************************************ 00:12:23.459 START TEST nvmf_connect_disconnect 00:12:23.459 ************************************ 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:23.459 * Looking for test storage... 
00:12:23.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:23.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.459 --rc genhtml_branch_coverage=1 00:12:23.459 --rc genhtml_function_coverage=1 00:12:23.459 --rc genhtml_legend=1 00:12:23.459 --rc geninfo_all_blocks=1 00:12:23.459 --rc geninfo_unexecuted_blocks=1 00:12:23.459 00:12:23.459 ' 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:23.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.459 --rc genhtml_branch_coverage=1 00:12:23.459 --rc genhtml_function_coverage=1 00:12:23.459 --rc genhtml_legend=1 00:12:23.459 --rc geninfo_all_blocks=1 00:12:23.459 --rc geninfo_unexecuted_blocks=1 00:12:23.459 00:12:23.459 ' 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:23.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.459 --rc genhtml_branch_coverage=1 00:12:23.459 --rc genhtml_function_coverage=1 00:12:23.459 --rc genhtml_legend=1 00:12:23.459 --rc geninfo_all_blocks=1 00:12:23.459 --rc geninfo_unexecuted_blocks=1 00:12:23.459 00:12:23.459 ' 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:23.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.459 --rc genhtml_branch_coverage=1 00:12:23.459 --rc genhtml_function_coverage=1 00:12:23.459 --rc genhtml_legend=1 00:12:23.459 --rc geninfo_all_blocks=1 00:12:23.459 --rc geninfo_unexecuted_blocks=1 00:12:23.459 00:12:23.459 ' 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:23.459 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:23.460 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:23.460 06:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:30.029 06:02:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:30.029 06:02:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:30.029 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:30.029 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:30.029 06:02:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:30.029 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:30.030 Found net devices under 0000:af:00.0: cvl_0_0 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:30.030 06:02:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:30.030 Found net devices under 0000:af:00.1: cvl_0_1 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:30.030 06:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:30.030 06:02:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:30.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:30.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:12:30.030 00:12:30.030 --- 10.0.0.2 ping statistics --- 00:12:30.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.030 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:30.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:30.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:12:30.030 00:12:30.030 --- 10.0.0.1 ping statistics --- 00:12:30.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.030 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=888658 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 888658 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 888658 ']' 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:30.030 [2024-12-15 06:02:49.429626] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:12:30.030 [2024-12-15 06:02:49.429677] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.030 [2024-12-15 06:02:49.509684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:30.030 [2024-12-15 06:02:49.532534] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:30.030 [2024-12-15 06:02:49.532573] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:30.030 [2024-12-15 06:02:49.532580] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:30.030 [2024-12-15 06:02:49.532586] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:30.030 [2024-12-15 06:02:49.532591] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:30.030 [2024-12-15 06:02:49.533900] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.030 [2024-12-15 06:02:49.534038] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:30.030 [2024-12-15 06:02:49.534083] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.030 [2024-12-15 06:02:49.534085] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:30.030 06:02:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:30.030 [2024-12-15 06:02:49.670817] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:30.030 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.031 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:30.031 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:30.031 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.031 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:30.031 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.031 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:30.031 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.031 06:02:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:30.031 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.031 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.031 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.031 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:30.031 [2024-12-15 06:02:49.729372] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.031 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.031 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:30.031 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:30.031 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:30.031 06:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:31.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.465 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.435 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.408 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.304 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.477 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.009 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.473 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.964 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.363 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.819 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.304 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.690 [2024-12-15 06:06:04.426856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d080 is same with the state(6) to be set 00:15:44.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.660 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.582 [2024-12-15 06:06:32.102979] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88cee0 is same with the state(6) to be set 00:16:12.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.486 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 
00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:21.563 rmmod nvme_tcp 00:16:21.563 rmmod nvme_fabrics 00:16:21.563 rmmod nvme_keyring 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 888658 ']' 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 888658 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 888658 ']' 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 888658 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 888658 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 888658' 00:16:21.563 killing process with pid 888658 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@973 -- # kill 888658 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 888658 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:21.563 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:24.099 00:16:24.099 real 4m0.532s 00:16:24.099 user 15m18.770s 00:16:24.099 sys 0m24.507s 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:24.099 ************************************ 00:16:24.099 END TEST nvmf_connect_disconnect 00:16:24.099 ************************************ 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:24.099 ************************************ 00:16:24.099 START TEST nvmf_multitarget 00:16:24.099 ************************************ 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:24.099 * Looking for test storage... 
00:16:24.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:24.099 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.099 --rc genhtml_branch_coverage=1 00:16:24.099 --rc genhtml_function_coverage=1 00:16:24.099 --rc genhtml_legend=1 00:16:24.099 --rc geninfo_all_blocks=1 00:16:24.099 --rc geninfo_unexecuted_blocks=1 00:16:24.099 00:16:24.099 ' 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:24.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.099 --rc genhtml_branch_coverage=1 00:16:24.099 --rc genhtml_function_coverage=1 00:16:24.099 --rc genhtml_legend=1 00:16:24.099 --rc geninfo_all_blocks=1 00:16:24.099 --rc geninfo_unexecuted_blocks=1 00:16:24.099 00:16:24.099 ' 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:24.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.099 --rc genhtml_branch_coverage=1 00:16:24.099 --rc genhtml_function_coverage=1 00:16:24.099 --rc genhtml_legend=1 00:16:24.099 --rc geninfo_all_blocks=1 00:16:24.099 --rc geninfo_unexecuted_blocks=1 00:16:24.099 00:16:24.099 ' 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:24.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.099 --rc genhtml_branch_coverage=1 00:16:24.099 --rc genhtml_function_coverage=1 00:16:24.099 --rc genhtml_legend=1 00:16:24.099 --rc geninfo_all_blocks=1 00:16:24.099 --rc geninfo_unexecuted_blocks=1 00:16:24.099 00:16:24.099 ' 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.099 06:06:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.099 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:24.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.100 06:06:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:24.100 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:30.670 06:06:49 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:30.670 06:06:49 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:30.670 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:30.670 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.670 06:06:49 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:30.670 Found net devices under 0000:af:00.0: cvl_0_0 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.670 
06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:30.670 Found net devices under 0000:af:00.1: cvl_0_1 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:30.670 06:06:49 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:30.670 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:30.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:30.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:16:30.671 00:16:30.671 --- 10.0.0.2 ping statistics --- 00:16:30.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.671 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:30.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:30.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:16:30.671 00:16:30.671 --- 10.0.0.1 ping statistics --- 00:16:30.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.671 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=932023 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 932023 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 932023 ']' 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:30.671 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:30.671 [2024-12-15 06:06:49.900833] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:16:30.671 [2024-12-15 06:06:49.900876] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.671 [2024-12-15 06:06:49.978511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:30.671 [2024-12-15 06:06:50.001205] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:30.671 [2024-12-15 06:06:50.001251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:30.671 [2024-12-15 06:06:50.001262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:30.671 [2024-12-15 06:06:50.001270] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:30.671 [2024-12-15 06:06:50.001275] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:30.671 [2024-12-15 06:06:50.002615] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.671 [2024-12-15 06:06:50.002732] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:30.671 [2024-12-15 06:06:50.002867] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.671 [2024-12-15 06:06:50.002867] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:30.671 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:30.671 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:30.671 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:30.671 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:30.671 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:30.671 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.671 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:30.671 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:30.671 06:06:50 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:30.671 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:30.671 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:30.671 "nvmf_tgt_1" 00:16:30.671 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:30.671 "nvmf_tgt_2" 00:16:30.671 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:30.671 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:30.671 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:30.671 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:30.671 true 00:16:30.671 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:30.671 true 00:16:30.671 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:30.671 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:30.930 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:30.930 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:30.930 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:30.930 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:30.930 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:30.930 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:30.930 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:30.930 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:30.930 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:30.931 rmmod nvme_tcp 00:16:30.931 rmmod nvme_fabrics 00:16:30.931 rmmod nvme_keyring 00:16:30.931 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:30.931 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:30.931 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:30.931 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 932023 ']' 00:16:30.931 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 932023 00:16:30.931 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 932023 ']' 00:16:30.931 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 932023 00:16:30.931 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:16:30.931 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:30.931 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 932023 00:16:30.931 06:06:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:30.931 06:06:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:30.931 06:06:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 932023' 00:16:30.931 killing process with pid 932023 00:16:30.931 06:06:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 932023 00:16:30.931 06:06:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 932023 00:16:31.190 06:06:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:31.190 06:06:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:31.190 06:06:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:31.190 06:06:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:31.190 06:06:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:16:31.190 06:06:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:31.190 06:06:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:16:31.190 06:06:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:31.190 06:06:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:31.190 06:06:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.190 06:06:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:31.190 06:06:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:33.728 00:16:33.728 real 0m9.517s 00:16:33.728 user 0m7.141s 00:16:33.728 sys 0m4.874s 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:33.728 ************************************ 00:16:33.728 END TEST nvmf_multitarget 00:16:33.728 ************************************ 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:33.728 ************************************ 00:16:33.728 START TEST nvmf_rpc 00:16:33.728 ************************************ 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:33.728 * Looking for test storage... 
00:16:33.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:33.728 06:06:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:33.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.728 --rc genhtml_branch_coverage=1 00:16:33.728 --rc genhtml_function_coverage=1 00:16:33.728 --rc genhtml_legend=1 00:16:33.728 --rc geninfo_all_blocks=1 00:16:33.728 --rc geninfo_unexecuted_blocks=1 
00:16:33.728 00:16:33.728 ' 00:16:33.728 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:33.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.728 --rc genhtml_branch_coverage=1 00:16:33.728 --rc genhtml_function_coverage=1 00:16:33.728 --rc genhtml_legend=1 00:16:33.728 --rc geninfo_all_blocks=1 00:16:33.729 --rc geninfo_unexecuted_blocks=1 00:16:33.729 00:16:33.729 ' 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:33.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.729 --rc genhtml_branch_coverage=1 00:16:33.729 --rc genhtml_function_coverage=1 00:16:33.729 --rc genhtml_legend=1 00:16:33.729 --rc geninfo_all_blocks=1 00:16:33.729 --rc geninfo_unexecuted_blocks=1 00:16:33.729 00:16:33.729 ' 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:33.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.729 --rc genhtml_branch_coverage=1 00:16:33.729 --rc genhtml_function_coverage=1 00:16:33.729 --rc genhtml_legend=1 00:16:33.729 --rc geninfo_all_blocks=1 00:16:33.729 --rc geninfo_unexecuted_blocks=1 00:16:33.729 00:16:33.729 ' 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.729 06:06:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:33.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:33.729 06:06:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:33.729 06:06:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:39.006 
06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 
(0x8086 - 0x159b)' 00:16:39.006 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:39.006 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:39.006 Found net devices under 0000:af:00.0: cvl_0_0 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.006 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:39.006 Found net devices under 0000:af:00.1: cvl_0_1 00:16:39.007 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.007 06:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:39.007 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:16:39.007 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:39.007 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:39.007 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:39.007 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:39.007 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:39.007 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:39.007 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:39.007 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:39.007 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:39.007 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:39.007 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:39.007 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:39.007 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:39.007 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:39.007 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:39.007 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:39.266 
06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:39.266 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:39.266 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:39.266 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:39.266 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:39.266 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:39.526 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:39.526 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:39.526 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:39.526 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:39.526 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:39.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:16:39.526 00:16:39.526 --- 10.0.0.2 ping statistics --- 00:16:39.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.526 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:16:39.526 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:39.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:39.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:16:39.526 00:16:39.526 --- 10.0.0.1 ping statistics --- 00:16:39.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.526 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:16:39.526 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:39.526 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:16:39.526 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:39.526 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:39.526 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:39.526 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:39.526 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:39.526 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:39.526 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:39.526 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:39.526 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:39.526 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:39.526 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.526 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=935720 00:16:39.526 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 935720 00:16:39.526 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:39.526 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 935720 ']' 00:16:39.526 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.526 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:39.526 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.526 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:39.526 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.526 [2024-12-15 06:06:59.543183] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:16:39.526 [2024-12-15 06:06:59.543235] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.526 [2024-12-15 06:06:59.621242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:39.526 [2024-12-15 06:06:59.644987] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.526 [2024-12-15 06:06:59.645027] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:39.526 [2024-12-15 06:06:59.645033] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:39.526 [2024-12-15 06:06:59.645039] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:39.526 [2024-12-15 06:06:59.645044] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:39.526 [2024-12-15 06:06:59.646467] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.526 [2024-12-15 06:06:59.646578] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:39.526 [2024-12-15 06:06:59.646661] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.526 [2024-12-15 06:06:59.646662] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:39.785 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:39.785 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:39.785 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:39.785 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:39.785 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.785 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:39.785 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:39.785 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.785 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.785 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.785 06:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:39.785 "tick_rate": 2100000000, 00:16:39.785 "poll_groups": [ 00:16:39.785 { 00:16:39.785 "name": "nvmf_tgt_poll_group_000", 00:16:39.785 "admin_qpairs": 0, 00:16:39.785 "io_qpairs": 0, 00:16:39.785 "current_admin_qpairs": 0, 00:16:39.785 "current_io_qpairs": 0, 00:16:39.785 "pending_bdev_io": 0, 00:16:39.785 "completed_nvme_io": 0, 00:16:39.785 "transports": [] 00:16:39.785 }, 00:16:39.786 { 00:16:39.786 "name": "nvmf_tgt_poll_group_001", 00:16:39.786 "admin_qpairs": 0, 00:16:39.786 "io_qpairs": 0, 00:16:39.786 "current_admin_qpairs": 0, 00:16:39.786 "current_io_qpairs": 0, 00:16:39.786 "pending_bdev_io": 0, 00:16:39.786 "completed_nvme_io": 0, 00:16:39.786 "transports": [] 00:16:39.786 }, 00:16:39.786 { 00:16:39.786 "name": "nvmf_tgt_poll_group_002", 00:16:39.786 "admin_qpairs": 0, 00:16:39.786 "io_qpairs": 0, 00:16:39.786 "current_admin_qpairs": 0, 00:16:39.786 "current_io_qpairs": 0, 00:16:39.786 "pending_bdev_io": 0, 00:16:39.786 "completed_nvme_io": 0, 00:16:39.786 "transports": [] 00:16:39.786 }, 00:16:39.786 { 00:16:39.786 "name": "nvmf_tgt_poll_group_003", 00:16:39.786 "admin_qpairs": 0, 00:16:39.786 "io_qpairs": 0, 00:16:39.786 "current_admin_qpairs": 0, 00:16:39.786 "current_io_qpairs": 0, 00:16:39.786 "pending_bdev_io": 0, 00:16:39.786 "completed_nvme_io": 0, 00:16:39.786 "transports": [] 00:16:39.786 } 00:16:39.786 ] 00:16:39.786 }' 00:16:39.786 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:39.786 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:39.786 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:39.786 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:39.786 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:39.786 06:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:39.786 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:39.786 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:39.786 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.786 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.786 [2024-12-15 06:06:59.887735] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:39.786 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.786 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:39.786 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.786 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.786 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.786 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:39.786 "tick_rate": 2100000000, 00:16:39.786 "poll_groups": [ 00:16:39.786 { 00:16:39.786 "name": "nvmf_tgt_poll_group_000", 00:16:39.786 "admin_qpairs": 0, 00:16:39.786 "io_qpairs": 0, 00:16:39.786 "current_admin_qpairs": 0, 00:16:39.786 "current_io_qpairs": 0, 00:16:39.786 "pending_bdev_io": 0, 00:16:39.786 "completed_nvme_io": 0, 00:16:39.786 "transports": [ 00:16:39.786 { 00:16:39.786 "trtype": "TCP" 00:16:39.786 } 00:16:39.786 ] 00:16:39.786 }, 00:16:39.786 { 00:16:39.786 "name": "nvmf_tgt_poll_group_001", 00:16:39.786 "admin_qpairs": 0, 00:16:39.786 "io_qpairs": 0, 00:16:39.786 "current_admin_qpairs": 0, 00:16:39.786 "current_io_qpairs": 0, 00:16:39.786 "pending_bdev_io": 0, 00:16:39.786 
"completed_nvme_io": 0, 00:16:39.786 "transports": [ 00:16:39.786 { 00:16:39.786 "trtype": "TCP" 00:16:39.786 } 00:16:39.786 ] 00:16:39.786 }, 00:16:39.786 { 00:16:39.786 "name": "nvmf_tgt_poll_group_002", 00:16:39.786 "admin_qpairs": 0, 00:16:39.786 "io_qpairs": 0, 00:16:39.786 "current_admin_qpairs": 0, 00:16:39.786 "current_io_qpairs": 0, 00:16:39.786 "pending_bdev_io": 0, 00:16:39.786 "completed_nvme_io": 0, 00:16:39.786 "transports": [ 00:16:39.786 { 00:16:39.786 "trtype": "TCP" 00:16:39.786 } 00:16:39.786 ] 00:16:39.786 }, 00:16:39.786 { 00:16:39.786 "name": "nvmf_tgt_poll_group_003", 00:16:39.786 "admin_qpairs": 0, 00:16:39.786 "io_qpairs": 0, 00:16:39.786 "current_admin_qpairs": 0, 00:16:39.786 "current_io_qpairs": 0, 00:16:39.786 "pending_bdev_io": 0, 00:16:39.786 "completed_nvme_io": 0, 00:16:39.786 "transports": [ 00:16:39.786 { 00:16:39.786 "trtype": "TCP" 00:16:39.786 } 00:16:39.786 ] 00:16:39.786 } 00:16:39.786 ] 00:16:39.786 }' 00:16:39.786 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:39.786 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:39.786 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:39.786 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:40.045 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:40.045 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:40.045 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:40.045 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:40.045 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:40.045 
06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:40.045 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:40.045 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:40.045 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:40.045 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:40.045 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.045 06:06:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.045 Malloc1 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:40.045 06:07:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.045 [2024-12-15 06:07:00.048979] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:40.045 [2024-12-15 06:07:00.083656] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:16:40.045 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:40.045 could not add new controller: failed to write to nvme-fabrics device 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.045 06:07:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:41.421 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:41.421 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:41.421 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:41.421 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:41.421 06:07:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:43.325 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:43.325 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:43.325 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:43.325 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:43.325 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:43.325 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
00:16:43.325 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:43.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.325 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:43.325 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:43.325 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:43.325 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.325 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:43.325 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.325 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:43.325 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:43.325 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.325 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.325 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.325 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:43.325 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:43.326 06:07:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:43.326 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:43.326 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:43.326 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:43.326 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:43.326 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:43.326 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:43.326 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:43.326 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:43.326 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:43.585 [2024-12-15 06:07:03.468935] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:16:43.585 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:43.585 could not add new controller: failed to write to nvme-fabrics device 00:16:43.585 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:43.585 
06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:43.585 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:43.585 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:43.585 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:43.585 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.585 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.585 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.585 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:44.961 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:44.961 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:44.961 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:44.961 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:44.961 06:07:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:46.863 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:46.863 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:46.863 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:16:46.863 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:46.863 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:46.863 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:46.863 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:46.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.863 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:46.863 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:46.863 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:46.863 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:46.863 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:46.863 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:46.863 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:46.863 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:46.863 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.863 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.863 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.864 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:46.864 06:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:46.864 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:46.864 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.864 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.864 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.864 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:46.864 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.864 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.864 [2024-12-15 06:07:06.839294] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.864 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.864 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:46.864 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.864 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.864 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.864 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:46.864 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.864 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:46.864 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.864 06:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:48.241 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:48.241 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:48.241 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:48.241 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:48.241 06:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:50.215 06:07:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:50.215 06:07:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:50.215 06:07:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:50.215 06:07:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:50.215 06:07:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:50.215 06:07:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:50.215 06:07:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:50.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:50.215 
06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.215 [2024-12-15 06:07:10.132523] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.215 06:07:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:51.152 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:51.152 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:51.152 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:51.152 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:51.152 06:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:53.685 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:53.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.686 06:07:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.686 [2024-12-15 06:07:13.407085] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.686 06:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:54.624 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:54.624 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:54.624 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:54.624 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:54.624 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:56.527 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:56.527 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:56.527 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:56.527 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:56.527 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:56.527 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:56.527 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:56.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:56.527 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:56.527 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:56.527 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:56.527 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:56.527 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:56.527 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:56.786 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:56.786 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:56.786 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.786 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.786 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.786 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:56.786 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.786 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.786 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.786 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:56.786 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:56.786 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.786 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.786 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.786 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:16:56.786 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.786 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.786 [2024-12-15 06:07:16.708548] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.786 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.786 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:56.786 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.786 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.786 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.786 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:56.786 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.786 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.786 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.786 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:57.722 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:57.722 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:57.722 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:16:57.722 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:57.722 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:00.256 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:00.256 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:00.256 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:00.256 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:00.256 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:00.256 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:00.256 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:00.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:00.256 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:00.256 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:00.256 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:00.256 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:00.256 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:00.256 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:00.256 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:17:00.256 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:00.256 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.256 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.256 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.256 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.256 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.256 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.256 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.256 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:00.256 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:00.256 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.256 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.256 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.256 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:00.256 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.256 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.256 [2024-12-15 06:07:20.016697] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:00.256 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.256 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:00.256 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.256 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.256 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.256 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:00.256 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.256 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.256 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.256 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:01.193 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:01.193 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:01.193 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:01.193 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:01.193 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:17:03.098 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:03.098 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:03.098 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:03.357 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:03.357 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:03.357 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:03.357 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:03.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:03.357 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:03.357 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.358 [2024-12-15 06:07:23.373036] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.358 [2024-12-15 06:07:23.425103] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.358 
06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:17:03.358 [2024-12-15 06:07:23.473249] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.358 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.618 [2024-12-15 06:07:23.521418] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.618 [2024-12-15 06:07:23.573593] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.618 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:03.618 "tick_rate": 2100000000, 00:17:03.618 "poll_groups": [ 00:17:03.618 { 00:17:03.618 "name": "nvmf_tgt_poll_group_000", 00:17:03.618 "admin_qpairs": 2, 00:17:03.618 "io_qpairs": 168, 00:17:03.618 "current_admin_qpairs": 0, 00:17:03.618 "current_io_qpairs": 0, 00:17:03.618 "pending_bdev_io": 0, 00:17:03.618 "completed_nvme_io": 219, 00:17:03.618 "transports": [ 00:17:03.618 { 00:17:03.618 "trtype": "TCP" 00:17:03.618 } 00:17:03.618 ] 00:17:03.618 }, 00:17:03.618 { 00:17:03.618 "name": "nvmf_tgt_poll_group_001", 00:17:03.619 "admin_qpairs": 2, 00:17:03.619 "io_qpairs": 168, 00:17:03.619 "current_admin_qpairs": 0, 00:17:03.619 "current_io_qpairs": 0, 00:17:03.619 "pending_bdev_io": 0, 00:17:03.619 "completed_nvme_io": 267, 00:17:03.619 "transports": [ 00:17:03.619 { 00:17:03.619 "trtype": "TCP" 00:17:03.619 } 00:17:03.619 ] 00:17:03.619 }, 00:17:03.619 { 00:17:03.619 "name": "nvmf_tgt_poll_group_002", 00:17:03.619 "admin_qpairs": 1, 00:17:03.619 "io_qpairs": 168, 00:17:03.619 "current_admin_qpairs": 0, 00:17:03.619 "current_io_qpairs": 0, 00:17:03.619 "pending_bdev_io": 0, 
00:17:03.619 "completed_nvme_io": 366, 00:17:03.619 "transports": [ 00:17:03.619 { 00:17:03.619 "trtype": "TCP" 00:17:03.619 } 00:17:03.619 ] 00:17:03.619 }, 00:17:03.619 { 00:17:03.619 "name": "nvmf_tgt_poll_group_003", 00:17:03.619 "admin_qpairs": 2, 00:17:03.619 "io_qpairs": 168, 00:17:03.619 "current_admin_qpairs": 0, 00:17:03.619 "current_io_qpairs": 0, 00:17:03.619 "pending_bdev_io": 0, 00:17:03.619 "completed_nvme_io": 170, 00:17:03.619 "transports": [ 00:17:03.619 { 00:17:03.619 "trtype": "TCP" 00:17:03.619 } 00:17:03.619 ] 00:17:03.619 } 00:17:03.619 ] 00:17:03.619 }' 00:17:03.619 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:03.619 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:03.619 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:03.619 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:03.619 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:03.619 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:03.619 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:03.619 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:03.619 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:03.619 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:17:03.619 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:03.619 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:03.619 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:17:03.619 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:03.619 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:03.619 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:03.619 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:03.619 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:03.619 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:03.619 rmmod nvme_tcp 00:17:03.619 rmmod nvme_fabrics 00:17:03.878 rmmod nvme_keyring 00:17:03.878 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:03.878 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:03.878 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:03.878 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 935720 ']' 00:17:03.878 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 935720 00:17:03.878 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 935720 ']' 00:17:03.878 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 935720 00:17:03.878 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:17:03.878 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:03.878 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 935720 00:17:03.878 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:03.878 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:03.878 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 935720' 00:17:03.878 killing process with pid 935720 00:17:03.878 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 935720 00:17:03.878 06:07:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 935720 00:17:04.137 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:04.137 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:04.137 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:04.137 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:04.137 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:17:04.137 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:04.137 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:17:04.137 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:04.137 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:04.137 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.137 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:04.137 06:07:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.042 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:06.042 00:17:06.042 real 0m32.792s 00:17:06.042 user 1m38.790s 00:17:06.042 sys 0m6.504s 00:17:06.042 06:07:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:06.042 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.042 ************************************ 00:17:06.042 END TEST nvmf_rpc 00:17:06.042 ************************************ 00:17:06.042 06:07:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:06.042 06:07:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:06.042 06:07:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:06.042 06:07:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:06.302 ************************************ 00:17:06.302 START TEST nvmf_invalid 00:17:06.302 ************************************ 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:06.302 * Looking for test storage... 
00:17:06.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:06.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.302 --rc genhtml_branch_coverage=1 00:17:06.302 --rc 
genhtml_function_coverage=1 00:17:06.302 --rc genhtml_legend=1 00:17:06.302 --rc geninfo_all_blocks=1 00:17:06.302 --rc geninfo_unexecuted_blocks=1 00:17:06.302 00:17:06.302 ' 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:06.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.302 --rc genhtml_branch_coverage=1 00:17:06.302 --rc genhtml_function_coverage=1 00:17:06.302 --rc genhtml_legend=1 00:17:06.302 --rc geninfo_all_blocks=1 00:17:06.302 --rc geninfo_unexecuted_blocks=1 00:17:06.302 00:17:06.302 ' 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:06.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.302 --rc genhtml_branch_coverage=1 00:17:06.302 --rc genhtml_function_coverage=1 00:17:06.302 --rc genhtml_legend=1 00:17:06.302 --rc geninfo_all_blocks=1 00:17:06.302 --rc geninfo_unexecuted_blocks=1 00:17:06.302 00:17:06.302 ' 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:06.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.302 --rc genhtml_branch_coverage=1 00:17:06.302 --rc genhtml_function_coverage=1 00:17:06.302 --rc genhtml_legend=1 00:17:06.302 --rc geninfo_all_blocks=1 00:17:06.302 --rc geninfo_unexecuted_blocks=1 00:17:06.302 00:17:06.302 ' 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:06.302 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.303 06:07:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:06.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:06.303 06:07:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:06.303 06:07:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:12.873 06:07:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:12.873 06:07:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:12.873 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:12.873 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:12.873 Found net devices under 0000:af:00.0: cvl_0_0 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:12.873 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:12.874 Found net devices under 0000:af:00.1: cvl_0_1 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:12.874 06:07:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:12.874 06:07:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:12.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:12.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:17:12.874 00:17:12.874 --- 10.0.0.2 ping statistics --- 00:17:12.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.874 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:12.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:12.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:17:12.874 00:17:12.874 --- 10.0.0.1 ping statistics --- 00:17:12.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.874 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:12.874 06:07:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=943186 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 943186 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 943186 ']' 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:12.874 [2024-12-15 06:07:32.401609] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:17:12.874 [2024-12-15 06:07:32.401658] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.874 [2024-12-15 06:07:32.479306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:12.874 [2024-12-15 06:07:32.503457] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.874 [2024-12-15 06:07:32.503490] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:12.874 [2024-12-15 06:07:32.503497] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:12.874 [2024-12-15 06:07:32.503503] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:12.874 [2024-12-15 06:07:32.503508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:12.874 [2024-12-15 06:07:32.504922] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.874 [2024-12-15 06:07:32.505033] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.874 [2024-12-15 06:07:32.505139] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.874 [2024-12-15 06:07:32.505140] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode11699 00:17:12.874 [2024-12-15 06:07:32.802329] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:12.874 { 00:17:12.874 "nqn": "nqn.2016-06.io.spdk:cnode11699", 00:17:12.874 "tgt_name": "foobar", 00:17:12.874 "method": "nvmf_create_subsystem", 00:17:12.874 "req_id": 1 00:17:12.874 } 00:17:12.874 Got JSON-RPC error 
response 00:17:12.874 response: 00:17:12.874 { 00:17:12.874 "code": -32603, 00:17:12.874 "message": "Unable to find target foobar" 00:17:12.874 }' 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:12.874 { 00:17:12.874 "nqn": "nqn.2016-06.io.spdk:cnode11699", 00:17:12.874 "tgt_name": "foobar", 00:17:12.874 "method": "nvmf_create_subsystem", 00:17:12.874 "req_id": 1 00:17:12.874 } 00:17:12.874 Got JSON-RPC error response 00:17:12.874 response: 00:17:12.874 { 00:17:12.874 "code": -32603, 00:17:12.874 "message": "Unable to find target foobar" 00:17:12.874 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:12.874 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode14256 00:17:13.133 [2024-12-15 06:07:33.019048] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14256: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:13.133 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:13.133 { 00:17:13.133 "nqn": "nqn.2016-06.io.spdk:cnode14256", 00:17:13.133 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:13.133 "method": "nvmf_create_subsystem", 00:17:13.133 "req_id": 1 00:17:13.133 } 00:17:13.133 Got JSON-RPC error response 00:17:13.133 response: 00:17:13.133 { 00:17:13.133 "code": -32602, 00:17:13.133 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:13.133 }' 00:17:13.133 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:13.133 { 00:17:13.133 "nqn": "nqn.2016-06.io.spdk:cnode14256", 00:17:13.133 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:13.133 "method": "nvmf_create_subsystem", 
00:17:13.133 "req_id": 1 00:17:13.133 } 00:17:13.133 Got JSON-RPC error response 00:17:13.133 response: 00:17:13.133 { 00:17:13.133 "code": -32602, 00:17:13.133 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:13.133 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:13.133 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:13.133 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14688 00:17:13.133 [2024-12-15 06:07:33.231752] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14688: invalid model number 'SPDK_Controller' 00:17:13.133 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:13.133 { 00:17:13.133 "nqn": "nqn.2016-06.io.spdk:cnode14688", 00:17:13.133 "model_number": "SPDK_Controller\u001f", 00:17:13.133 "method": "nvmf_create_subsystem", 00:17:13.133 "req_id": 1 00:17:13.133 } 00:17:13.133 Got JSON-RPC error response 00:17:13.133 response: 00:17:13.133 { 00:17:13.133 "code": -32602, 00:17:13.133 "message": "Invalid MN SPDK_Controller\u001f" 00:17:13.133 }' 00:17:13.133 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:13.133 { 00:17:13.133 "nqn": "nqn.2016-06.io.spdk:cnode14688", 00:17:13.133 "model_number": "SPDK_Controller\u001f", 00:17:13.133 "method": "nvmf_create_subsystem", 00:17:13.133 "req_id": 1 00:17:13.133 } 00:17:13.133 Got JSON-RPC error response 00:17:13.133 response: 00:17:13.133 { 00:17:13.133 "code": -32602, 00:17:13.133 "message": "Invalid MN SPDK_Controller\u001f" 00:17:13.133 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:13.133 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:13.133 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:17:13.133 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:13.133 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:13.133 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:13.133 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:13.133 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.392 06:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:17:13.392 06:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:13.392 06:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:13.392 06:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.392 06:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.392 06:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ - == \- ]] 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@29 -- # string='\-Ss$s Mbn"KnjHQ+}:.i4' 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '\-Ss$s Mbn"KnjHQ+}:.i4' 00:17:13.392 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '\-Ss$s Mbn"KnjHQ+}:.i4' nqn.2016-06.io.spdk:cnode26224 00:17:13.652 [2024-12-15 06:07:33.572922] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26224: invalid serial number '\-Ss$s Mbn"KnjHQ+}:.i4' 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:13.652 { 00:17:13.652 "nqn": "nqn.2016-06.io.spdk:cnode26224", 00:17:13.652 "serial_number": "\\-Ss$s Mbn\"KnjHQ+}:.i4", 00:17:13.652 "method": "nvmf_create_subsystem", 00:17:13.652 "req_id": 1 00:17:13.652 } 00:17:13.652 Got JSON-RPC error response 00:17:13.652 response: 00:17:13.652 { 00:17:13.652 "code": -32602, 00:17:13.652 "message": "Invalid SN \\-Ss$s Mbn\"KnjHQ+}:.i4" 00:17:13.652 }' 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:13.652 { 00:17:13.652 "nqn": "nqn.2016-06.io.spdk:cnode26224", 00:17:13.652 "serial_number": "\\-Ss$s Mbn\"KnjHQ+}:.i4", 00:17:13.652 "method": "nvmf_create_subsystem", 00:17:13.652 "req_id": 1 00:17:13.652 } 00:17:13.652 Got JSON-RPC error response 00:17:13.652 response: 00:17:13.652 { 00:17:13.652 "code": -32602, 00:17:13.652 "message": "Invalid SN \\-Ss$s Mbn\"KnjHQ+}:.i4" 00:17:13.652 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 
41 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:17:13.652 06:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:13.652 06:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:17:13.652 06:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.652 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.653 06:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.653 06:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:13.653 06:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:13.653 06:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.653 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:17:13.944 06:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.944 06:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.944 06:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:13.944 06:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ " == \- ]] 00:17:13.944 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '"+ /dev/null' 00:17:16.115 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.022 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:18.022 00:17:18.022 real 0m11.973s 00:17:18.022 user 0m18.630s 00:17:18.022 sys 0m5.278s 00:17:18.022 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:18.022 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:18.022 ************************************ 00:17:18.022 END TEST nvmf_invalid 00:17:18.022 ************************************ 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:18.282 ************************************ 00:17:18.282 START TEST nvmf_connect_stress 00:17:18.282 ************************************ 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:18.282 * Looking for test storage... 00:17:18.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- scripts/common.sh@338 -- # local 'op=<' 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:18.282 06:07:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:18.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.282 --rc genhtml_branch_coverage=1 00:17:18.282 --rc genhtml_function_coverage=1 00:17:18.282 --rc genhtml_legend=1 00:17:18.282 --rc geninfo_all_blocks=1 00:17:18.282 --rc geninfo_unexecuted_blocks=1 00:17:18.282 00:17:18.282 ' 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:18.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.282 --rc genhtml_branch_coverage=1 00:17:18.282 --rc genhtml_function_coverage=1 00:17:18.282 --rc genhtml_legend=1 00:17:18.282 --rc geninfo_all_blocks=1 00:17:18.282 --rc geninfo_unexecuted_blocks=1 00:17:18.282 00:17:18.282 ' 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:18.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.282 --rc genhtml_branch_coverage=1 00:17:18.282 --rc genhtml_function_coverage=1 00:17:18.282 --rc genhtml_legend=1 00:17:18.282 --rc geninfo_all_blocks=1 00:17:18.282 --rc geninfo_unexecuted_blocks=1 00:17:18.282 00:17:18.282 ' 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:18.282 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:18.282 --rc genhtml_branch_coverage=1 00:17:18.282 --rc genhtml_function_coverage=1 00:17:18.282 --rc genhtml_legend=1 00:17:18.282 --rc geninfo_all_blocks=1 00:17:18.282 --rc geninfo_unexecuted_blocks=1 00:17:18.282 00:17:18.282 ' 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:18.282 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:18.283 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:18.283 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:18.283 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:18.283 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:18.283 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:18.542 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:18.542 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:18.542 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:18.542 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:18.542 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:18.542 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:18.542 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:18.542 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.542 06:07:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.542 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.542 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:18.542 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.543 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:18.543 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:18.543 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:18.543 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:18.543 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:18.543 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:18.543 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:18.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:18.543 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:18.543 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:18.543 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:18.543 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:17:18.543 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:18.543 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:18.543 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:18.543 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:18.543 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:18.543 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.543 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:18.543 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.543 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:18.543 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:18.543 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:18.543 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:25.136 06:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:25.136 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:25.136 06:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:25.136 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:25.136 06:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:25.136 Found net devices under 0000:af:00.0: cvl_0_0 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:25.136 Found net devices under 0000:af:00.1: cvl_0_1 
00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:25.136 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:25.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:25.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:17:25.137 00:17:25.137 --- 10.0.0.2 ping statistics --- 00:17:25.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.137 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:25.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:25.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:17:25.137 00:17:25.137 --- 10.0.0.1 ping statistics --- 00:17:25.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.137 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:25.137 06:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=947496 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 947496 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 947496 ']' 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.137 [2024-12-15 06:07:44.540606] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:17:25.137 [2024-12-15 06:07:44.540657] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.137 [2024-12-15 06:07:44.618822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:25.137 [2024-12-15 06:07:44.641528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:25.137 [2024-12-15 06:07:44.641563] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:25.137 [2024-12-15 06:07:44.641570] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:25.137 [2024-12-15 06:07:44.641576] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:25.137 [2024-12-15 06:07:44.641581] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:25.137 [2024-12-15 06:07:44.642890] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:25.137 [2024-12-15 06:07:44.642999] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.137 [2024-12-15 06:07:44.643014] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.137 [2024-12-15 06:07:44.774747] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.137 [2024-12-15 06:07:44.794966] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.137 NULL1 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=947523 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:25.137 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.138 06:07:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.138 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.138 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:25.138 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:25.138 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.138 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.706 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.706 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:25.706 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:25.706 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.706 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.965 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.966 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:25.966 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:25.966 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.966 06:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:26.225 06:07:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.225 06:07:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:26.225 06:07:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:26.225 06:07:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.225 06:07:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:26.485 06:07:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.485 06:07:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:26.485 06:07:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:26.485 06:07:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.485 06:07:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:26.745 06:07:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.745 06:07:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:26.745 06:07:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:26.745 06:07:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.745 06:07:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:27.314 06:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.314 06:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:27.314 06:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:27.314 06:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.314 06:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:27.573 06:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.574 06:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:27.574 06:07:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:27.574 06:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.574 06:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:27.833 06:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.833 06:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:27.833 06:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:27.833 06:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.833 06:07:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:28.093 06:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.093 06:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:28.093 06:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:28.093 06:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.093 06:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:28.352 06:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.352 06:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:28.352 06:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:28.352 06:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.352 06:07:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:28.921 06:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.921 06:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:28.921 06:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:28.921 06:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.921 06:07:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.180 06:07:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.180 06:07:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:29.180 06:07:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:29.180 06:07:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.180 06:07:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.439 06:07:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.439 06:07:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:29.439 06:07:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:29.439 06:07:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.439 06:07:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.698 06:07:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.698 06:07:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:29.698 06:07:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:29.698 06:07:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.698 06:07:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:30.267 06:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.267 06:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:30.267 06:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:30.267 06:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.267 06:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:30.526 06:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.526 06:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:30.526 06:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:30.526 06:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.526 06:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:30.785 06:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.785 06:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:30.785 06:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:30.785 06:07:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.785 06:07:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:31.044 06:07:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.044 06:07:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:31.044 06:07:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:31.044 06:07:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.044 06:07:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:31.304 06:07:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.304 06:07:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:31.304 06:07:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:31.304 06:07:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.304 06:07:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:31.873 06:07:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.873 06:07:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:31.873 06:07:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:31.873 06:07:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.873 06:07:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.131 06:07:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.131 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:32.131 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.132 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.132 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.390 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.390 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:32.391 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.391 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.391 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.650 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.650 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:32.650 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.650 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.650 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.908 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.908 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:32.908 
06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.909 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.909 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:33.476 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.476 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:33.476 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:33.476 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.476 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:33.735 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.735 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:33.735 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:33.735 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.735 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:33.994 06:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.994 06:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:33.994 06:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:33.994 06:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.994 
06:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:34.253 06:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.253 06:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:34.253 06:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:34.253 06:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.253 06:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:34.821 06:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.821 06:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:34.821 06:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:34.821 06:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.821 06:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:34.821 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:35.081 06:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.081 06:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947523 00:17:35.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (947523) - No such process 00:17:35.081 06:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 947523 00:17:35.081 06:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:35.081 06:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:35.081 06:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:35.081 06:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:35.081 06:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:35.081 06:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:35.081 06:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:35.081 06:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:35.081 06:07:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:35.081 rmmod nvme_tcp 00:17:35.081 rmmod nvme_fabrics 00:17:35.081 rmmod nvme_keyring 00:17:35.081 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:35.081 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:35.081 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:35.081 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 947496 ']' 00:17:35.081 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 947496 00:17:35.081 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 947496 ']' 00:17:35.081 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 947496 00:17:35.081 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 
00:17:35.081 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:35.081 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 947496 00:17:35.081 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:35.081 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:35.081 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 947496' 00:17:35.081 killing process with pid 947496 00:17:35.081 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 947496 00:17:35.081 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 947496 00:17:35.341 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:35.341 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:35.341 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:35.341 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:35.341 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:17:35.341 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:35.341 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:17:35.341 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:35.341 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:17:35.341 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.341 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:35.341 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.248 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:37.248 00:17:37.248 real 0m19.100s 00:17:37.248 user 0m39.318s 00:17:37.248 sys 0m8.616s 00:17:37.248 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:37.248 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:37.248 ************************************ 00:17:37.248 END TEST nvmf_connect_stress 00:17:37.248 ************************************ 00:17:37.248 06:07:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:37.248 06:07:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:37.248 06:07:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:37.248 06:07:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:37.508 ************************************ 00:17:37.508 START TEST nvmf_fused_ordering 00:17:37.508 ************************************ 00:17:37.508 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:37.508 * Looking for test storage... 
00:17:37.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:37.508 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:37.508 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:17:37.508 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:37.508 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:37.508 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:37.508 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:37.508 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:37.508 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:37.508 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:37.508 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:37.508 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:37.509 06:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:37.509 06:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:37.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.509 --rc genhtml_branch_coverage=1 00:17:37.509 --rc genhtml_function_coverage=1 00:17:37.509 --rc genhtml_legend=1 00:17:37.509 --rc geninfo_all_blocks=1 00:17:37.509 --rc geninfo_unexecuted_blocks=1 00:17:37.509 00:17:37.509 ' 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:37.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.509 --rc genhtml_branch_coverage=1 00:17:37.509 --rc genhtml_function_coverage=1 00:17:37.509 --rc genhtml_legend=1 00:17:37.509 --rc geninfo_all_blocks=1 00:17:37.509 --rc geninfo_unexecuted_blocks=1 00:17:37.509 00:17:37.509 ' 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:37.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.509 --rc genhtml_branch_coverage=1 00:17:37.509 --rc genhtml_function_coverage=1 00:17:37.509 --rc genhtml_legend=1 00:17:37.509 --rc geninfo_all_blocks=1 00:17:37.509 --rc geninfo_unexecuted_blocks=1 00:17:37.509 00:17:37.509 ' 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:37.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.509 --rc genhtml_branch_coverage=1 00:17:37.509 --rc genhtml_function_coverage=1 00:17:37.509 --rc genhtml_legend=1 00:17:37.509 --rc geninfo_all_blocks=1 00:17:37.509 --rc geninfo_unexecuted_blocks=1 00:17:37.509 00:17:37.509 ' 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:37.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:37.509 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.510 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:37.510 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:37.510 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:37.510 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.510 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.510 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.510 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:37.510 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:37.510 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:37.510 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:44.084 06:08:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:44.084 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:44.084 06:08:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:44.084 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:44.084 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.085 06:08:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:44.085 Found net devices under 0000:af:00.0: cvl_0_0 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:44.085 Found net devices under 0000:af:00.1: cvl_0_1 
00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:44.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:44.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:17:44.085 00:17:44.085 --- 10.0.0.2 ping statistics --- 00:17:44.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.085 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:44.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:44.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:17:44.085 00:17:44.085 --- 10.0.0.1 ping statistics --- 00:17:44.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.085 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:44.085 06:08:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=952579 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 952579 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 952579 ']' 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:44.085 [2024-12-15 06:08:03.581290] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:17:44.085 [2024-12-15 06:08:03.581338] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.085 [2024-12-15 06:08:03.659786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.085 [2024-12-15 06:08:03.680652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.085 [2024-12-15 06:08:03.680689] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:44.085 [2024-12-15 06:08:03.680699] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:44.085 [2024-12-15 06:08:03.680705] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:44.085 [2024-12-15 06:08:03.680710] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:44.085 [2024-12-15 06:08:03.681246] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:44.085 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.086 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:44.086 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.086 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:44.086 [2024-12-15 06:08:03.810908] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:44.086 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.086 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:44.086 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.086 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:44.086 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.086 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:44.086 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.086 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:44.086 [2024-12-15 06:08:03.831103] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:44.086 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.086 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:44.086 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.086 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:44.086 NULL1 00:17:44.086 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.086 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:44.086 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.086 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:44.086 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.086 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:44.086 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.086 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:44.086 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.086 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:44.086 [2024-12-15 06:08:03.889494] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:17:44.086 [2024-12-15 06:08:03.889525] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid952768 ] 00:17:44.086 Attached to nqn.2016-06.io.spdk:cnode1 00:17:44.086 Namespace ID: 1 size: 1GB 00:17:44.086 fused_ordering(0) 00:17:44.086 fused_ordering(1) 00:17:44.086 fused_ordering(2) 00:17:44.086 fused_ordering(3) 00:17:44.086 fused_ordering(4) 00:17:44.086 fused_ordering(5) 00:17:44.086 fused_ordering(6) 00:17:44.086 fused_ordering(7) 00:17:44.086 fused_ordering(8) 00:17:44.086 fused_ordering(9) 00:17:44.086 fused_ordering(10) 00:17:44.086 fused_ordering(11) 00:17:44.086 fused_ordering(12) 00:17:44.086 fused_ordering(13) 00:17:44.086 fused_ordering(14) 00:17:44.086 fused_ordering(15) 00:17:44.086 fused_ordering(16) 00:17:44.086 fused_ordering(17) 00:17:44.086 fused_ordering(18) 00:17:44.086 fused_ordering(19) 00:17:44.086 fused_ordering(20) 00:17:44.086 fused_ordering(21) 00:17:44.086 fused_ordering(22) 00:17:44.086 fused_ordering(23) 00:17:44.086 fused_ordering(24) 00:17:44.086 fused_ordering(25) 00:17:44.086 fused_ordering(26) 00:17:44.086 fused_ordering(27) 00:17:44.086 
[fused_ordering(28) through fused_ordering(997): repetitive one-line-per-entry counter output elided; each entry is a single `fused_ordering(i)` line, with log timestamps advancing from 00:17:44.086 to 00:17:45.744 across the run]
00:17:45.744 fused_ordering(998) 00:17:45.744 fused_ordering(999) 00:17:45.744 fused_ordering(1000) 00:17:45.744 fused_ordering(1001) 00:17:45.744 fused_ordering(1002) 00:17:45.744 fused_ordering(1003) 00:17:45.744 fused_ordering(1004) 00:17:45.744 fused_ordering(1005) 00:17:45.744 fused_ordering(1006) 00:17:45.744 fused_ordering(1007) 00:17:45.744 fused_ordering(1008) 00:17:45.744 fused_ordering(1009) 00:17:45.744 fused_ordering(1010) 00:17:45.744 fused_ordering(1011) 00:17:45.744 fused_ordering(1012) 00:17:45.744 fused_ordering(1013) 00:17:45.744 fused_ordering(1014) 00:17:45.744 fused_ordering(1015) 00:17:45.744 fused_ordering(1016) 00:17:45.744 fused_ordering(1017) 00:17:45.744 fused_ordering(1018) 00:17:45.744 fused_ordering(1019) 00:17:45.744 fused_ordering(1020) 00:17:45.744 fused_ordering(1021) 00:17:45.744 fused_ordering(1022) 00:17:45.744 fused_ordering(1023) 00:17:45.744 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:45.744 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:45.744 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:45.744 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:45.744 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:45.744 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:45.744 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:45.744 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:45.744 rmmod nvme_tcp 00:17:45.744 rmmod nvme_fabrics 00:17:45.744 rmmod nvme_keyring 00:17:45.744 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:17:45.744 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:45.744 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:45.744 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 952579 ']' 00:17:45.744 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 952579 00:17:45.744 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 952579 ']' 00:17:45.744 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 952579 00:17:45.744 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:17:45.744 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:45.744 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 952579 00:17:45.744 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:45.744 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:45.744 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 952579' 00:17:45.744 killing process with pid 952579 00:17:45.744 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 952579 00:17:45.744 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 952579 00:17:45.744 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:45.744 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
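The teardown trace above walks through SPDK's killprocess helper: check the PID is set, confirm the process is alive with kill -0, look up the command name so a sudo wrapper is never signalled, then kill and reap with wait. The sketch below reproduces that pattern as a standalone function for illustration; the function name and message mirror the trace, but this is a simplified reconstruction, not the autotest_common.sh source.

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern visible in the trace:
# validate the PID, refuse to kill sudo, then kill and reap the child.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                    # '[' -z ... ']' guard in the trace
    kill -0 "$pid" 2>/dev/null || return 1       # process must still be alive
    local process_name=unknown
    if [ "$(uname)" = Linux ]; then
        # same lookup as the trace: ps --no-headers -o comm= <pid>
        process_name=$(ps --no-headers -o comm= "$pid" || true)
    fi
    [ "$process_name" = sudo ] && return 1       # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true              # reap so the PID is fully gone
}

sleep 30 &
killprocess $!
```

The wait at the end matters: without it the killed child would linger as a zombie and a later `kill -0` liveness check could still succeed.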
00:17:45.744 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:45.744 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:46.003 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:17:46.003 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:46.003 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:17:46.003 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:46.003 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:46.003 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.003 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:46.003 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.910 06:08:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:47.910 00:17:47.910 real 0m10.552s 00:17:47.910 user 0m4.861s 00:17:47.910 sys 0m5.779s 00:17:47.910 06:08:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:47.910 06:08:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:47.910 ************************************ 00:17:47.910 END TEST nvmf_fused_ordering 00:17:47.910 ************************************ 00:17:47.910 06:08:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:47.910 06:08:07 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:47.910 06:08:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:47.910 06:08:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:47.910 ************************************ 00:17:47.910 START TEST nvmf_ns_masking 00:17:47.910 ************************************ 00:17:47.910 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:48.170 * Looking for test storage... 00:17:48.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:48.170 06:08:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:48.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.170 --rc genhtml_branch_coverage=1 00:17:48.170 --rc genhtml_function_coverage=1 00:17:48.170 --rc genhtml_legend=1 00:17:48.170 --rc geninfo_all_blocks=1 00:17:48.170 --rc geninfo_unexecuted_blocks=1 00:17:48.170 00:17:48.170 ' 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:48.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.170 --rc genhtml_branch_coverage=1 00:17:48.170 --rc genhtml_function_coverage=1 00:17:48.170 --rc genhtml_legend=1 00:17:48.170 --rc geninfo_all_blocks=1 00:17:48.170 --rc geninfo_unexecuted_blocks=1 00:17:48.170 00:17:48.170 ' 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:48.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.170 --rc genhtml_branch_coverage=1 00:17:48.170 --rc genhtml_function_coverage=1 00:17:48.170 --rc genhtml_legend=1 00:17:48.170 --rc geninfo_all_blocks=1 00:17:48.170 --rc geninfo_unexecuted_blocks=1 00:17:48.170 00:17:48.170 ' 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:48.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.170 --rc genhtml_branch_coverage=1 00:17:48.170 --rc 
genhtml_function_coverage=1 00:17:48.170 --rc genhtml_legend=1 00:17:48.170 --rc geninfo_all_blocks=1 00:17:48.170 --rc geninfo_unexecuted_blocks=1 00:17:48.170 00:17:48.170 ' 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:48.170 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:48.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=034931bc-9b16-4982-ad31-098f1831eba3 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=a07de5e3-84ee-4cbb-91d6-0ba92b12ce12 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=1d7bb6f7-99be-4485-a393-c0d5fa23b9e4 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:48.171 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:54.741 06:08:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:54.741 06:08:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:54.741 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:54.741 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:54.741 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: 
cvl_0_0' 00:17:54.742 Found net devices under 0000:af:00.0: cvl_0_0 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:54.742 Found net devices under 0000:af:00.1: cvl_0_1 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:54.742 06:08:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:54.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:54.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:17:54.742 00:17:54.742 --- 10.0.0.2 ping statistics --- 00:17:54.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.742 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:54.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:54.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:17:54.742 00:17:54.742 --- 10.0.0.1 ping statistics --- 00:17:54.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.742 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=956503 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 956503 
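The trace above (nvmf/common.sh `nvmf_tcp_init`) builds a two-NIC loopback topology: one port of the same adapter is moved into a network namespace so target and initiator can talk over a real wire on a single host. A minimal dry-run sketch of that command sequence, using the interface names and addresses from the trace; the hypothetical `run` helper only prints each command, so no root privileges are needed:

```shell
# Dry-run of the nvmf_tcp_init topology; run() echoes instead of executing.
run() { printf '+ %s\n' "$*"; }

target_if=cvl_0_0
initiator_if=cvl_0_1
target_ns=cvl_0_0_ns_spdk

run ip -4 addr flush "$target_if"
run ip -4 addr flush "$initiator_if"
run ip netns add "$target_ns"
run ip link set "$target_if" netns "$target_ns"           # target NIC moves into the namespace
run ip addr add 10.0.0.1/24 dev "$initiator_if"           # initiator side stays in the root namespace
run ip netns exec "$target_ns" ip addr add 10.0.0.2/24 dev "$target_if"
run ip link set "$initiator_if" up
run ip netns exec "$target_ns" ip link set "$target_if" up
run ip netns exec "$target_ns" ip link set lo up
run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                    # initiator -> target
run ip netns exec "$target_ns" ping -c 1 10.0.0.1         # target -> initiator
```

The two pings at the end mirror the connectivity check the harness performs before declaring `return 0` from device setup.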
00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 956503 ']' 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:54.742 [2024-12-15 06:08:14.185603] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:17:54.742 [2024-12-15 06:08:14.185648] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.742 [2024-12-15 06:08:14.261480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.742 [2024-12-15 06:08:14.282515] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.742 [2024-12-15 06:08:14.282549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:54.742 [2024-12-15 06:08:14.282556] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:54.742 [2024-12-15 06:08:14.282562] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:54.742 [2024-12-15 06:08:14.282567] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:54.742 [2024-12-15 06:08:14.283056] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:54.742 [2024-12-15 06:08:14.585392] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:17:54.742 Malloc1 00:17:54.742 06:08:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:55.001 Malloc2 00:17:55.001 06:08:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:55.259 06:08:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:55.259 06:08:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:55.518 [2024-12-15 06:08:15.559477] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:55.518 06:08:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:55.518 06:08:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1d7bb6f7-99be-4485-a393-c0d5fa23b9e4 -a 10.0.0.2 -s 4420 -i 4 00:17:55.776 06:08:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:55.776 06:08:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:55.776 06:08:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:55.776 06:08:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:55.776 06:08:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:57.680 06:08:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:57.680 06:08:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:57.680 06:08:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:57.680 06:08:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:57.680 06:08:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:57.680 06:08:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:57.680 06:08:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:57.680 06:08:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:57.680 06:08:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:57.680 06:08:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:57.680 06:08:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:57.680 06:08:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:57.680 06:08:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:57.680 [ 0]:0x1 00:17:57.680 06:08:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:57.680 06:08:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:57.680 
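The `waitforserial` helper traced above (autotest_common.sh) polls `lsblk` until the expected number of block devices carrying the subsystem serial appears. A re-sketch of that retry loop; `lsblk` is stubbed with canned output here so the loop is runnable without hardware, and the per-retry sleep is shortened from the harness's 2 seconds:

```shell
# Stub: pretend one NVMe namespace with the test serial has appeared.
lsblk() { printf 'nvme0n1 SPDKISFASTANDAWESOME\n'; }

# Poll until `want` devices with the given serial are visible, up to 16 tries.
waitforserial() {
    local serial=$1 want=${2:-1} i=0 have=0
    while (( i++ <= 15 )); do
        have=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( have == want )) && return 0
        sleep 0.1   # the real helper waits much longer between retries
    done
    return 1
}

waitforserial SPDKISFASTANDAWESOME 1 && echo "serial visible"
```

With the stub in place the first probe already matches, mirroring the `nvme_devices=1` / `return 0` lines in the trace.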
06:08:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a7981072322c4c1aa5d3f21054ecd8bc 00:17:57.680 06:08:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a7981072322c4c1aa5d3f21054ecd8bc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:57.680 06:08:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:57.939 06:08:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:57.939 06:08:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:57.939 06:08:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:57.939 [ 0]:0x1 00:17:57.939 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:57.939 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:57.939 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a7981072322c4c1aa5d3f21054ecd8bc 00:17:57.939 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a7981072322c4c1aa5d3f21054ecd8bc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:57.939 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:57.939 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:57.939 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:57.939 [ 1]:0x2 00:17:57.939 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:17:57.939 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:58.198 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7a15aa47506744129cd782538c787785 00:17:58.198 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7a15aa47506744129cd782538c787785 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:58.198 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:58.198 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:58.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:58.198 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:58.457 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:58.457 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:58.457 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1d7bb6f7-99be-4485-a393-c0d5fa23b9e4 -a 10.0.0.2 -s 4420 -i 4 00:17:58.715 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:58.715 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:58.715 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:58.715 06:08:18 
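The `ns_is_visible` checks in the trace treat a namespace as visible when `nvme id-ns -o json` reports a non-zero NGUID (a masked namespace reads back as 32 zeros). A sketch of that comparison using a sample payload from the trace; the `nvme` CLI is stubbed as a JSON string, and `sed` stands in for the `jq -r .nguid` the real script uses:

```shell
# Stub for: nvme id-ns /dev/nvme0 -n 0x1 -o json (payload taken from the trace).
nvme_id_ns_json='{"nguid":"a7981072322c4c1aa5d3f21054ecd8bc"}'

# Extract the NGUID and compare against the all-zero "masked" pattern.
nguid=$(printf '%s' "$nvme_id_ns_json" | sed -n 's/.*"nguid":"\([0-9a-f]*\)".*/\1/p')
if [[ $nguid != "00000000000000000000000000000000" ]]; then
    echo "namespace visible (nguid=$nguid)"
else
    echo "namespace masked"
fi
# -> namespace visible (nguid=a7981072322c4c1aa5d3f21054ecd8bc)
```

The later `NOT ns_is_visible 0x1` steps in the trace are the inverse: after the namespace is re-added with `--no-auto-visible`, the same extraction yields the all-zero NGUID and the check is expected to fail.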
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:17:58.715 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:17:58.715 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:00.618 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:00.618 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:00.618 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:00.618 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:00.618 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:00.618 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:00.618 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:00.618 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:00.877 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:00.877 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:00.877 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:00.877 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:00.877 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:18:00.877 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:00.877 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:00.877 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:00.877 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:00.877 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:00.877 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:00.877 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:00.877 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:00.877 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:00.877 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:00.877 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:00.877 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:00.877 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:00.877 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:00.877 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:00.877 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:18:00.877 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:00.877 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:00.877 [ 0]:0x2 00:18:00.877 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:00.877 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:00.877 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7a15aa47506744129cd782538c787785 00:18:00.877 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7a15aa47506744129cd782538c787785 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:00.877 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:01.136 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:01.136 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:01.136 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:01.136 [ 0]:0x1 00:18:01.136 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:01.136 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:01.136 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a7981072322c4c1aa5d3f21054ecd8bc 00:18:01.136 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a7981072322c4c1aa5d3f21054ecd8bc != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:01.136 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:01.136 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:01.136 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:01.136 [ 1]:0x2 00:18:01.136 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:01.136 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:01.136 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7a15aa47506744129cd782538c787785 00:18:01.136 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7a15aa47506744129cd782538c787785 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:01.136 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:01.395 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:01.395 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:01.395 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:01.395 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:01.395 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:01.395 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:18:01.395 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:01.395 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:01.395 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:01.395 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:01.395 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:01.395 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:01.395 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:01.395 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:01.395 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:01.395 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:01.395 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:01.395 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:01.395 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:01.395 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:01.395 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:01.395 [ 0]:0x2 00:18:01.395 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:01.395 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:01.395 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7a15aa47506744129cd782538c787785 00:18:01.395 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7a15aa47506744129cd782538c787785 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:01.395 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:01.395 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:01.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:01.395 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:01.654 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:01.654 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1d7bb6f7-99be-4485-a393-c0d5fa23b9e4 -a 10.0.0.2 -s 4420 -i 4 00:18:01.912 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:01.912 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:01.912 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:01.912 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:01.912 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
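The masking cycle exercised above boils down to three RPCs. A dry-run of that sequence with the full `rpc.py` path shortened and a hypothetical `rpc` wrapper that only echoes, so no running `nvmf_tgt` is required:

```shell
# Dry-run wrapper: echo the RPC instead of invoking scripts/rpc.py.
rpc() { printf 'rpc.py %s\n' "$*"; }

NQN=nqn.2016-06.io.spdk:cnode1
HOST=nqn.2016-06.io.spdk:host1

rpc nvmf_subsystem_add_ns "$NQN" Malloc1 -n 1 --no-auto-visible  # namespace starts hidden
rpc nvmf_ns_add_host "$NQN" 1 "$HOST"                            # expose NSID 1 to this host
rpc nvmf_ns_remove_host "$NQN" 1 "$HOST"                         # hide it again
```

Each toggle is verified in the trace by reconnecting and re-running the NGUID visibility check from the initiator side.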
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:01.912 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:03.822 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:03.822 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:03.822 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:03.823 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:03.823 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:03.823 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:03.823 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:03.823 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:03.823 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:03.823 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:03.823 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:04.082 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:04.082 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:04.082 [ 0]:0x1 00:18:04.082 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:04.082 06:08:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:04.082 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a7981072322c4c1aa5d3f21054ecd8bc 00:18:04.082 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a7981072322c4c1aa5d3f21054ecd8bc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:04.082 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:04.082 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:04.082 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:04.082 [ 1]:0x2 00:18:04.082 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:04.082 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:04.082 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7a15aa47506744129cd782538c787785 00:18:04.082 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7a15aa47506744129cd782538c787785 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:04.082 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:04.340 
06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:04.340 [ 0]:0x2 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7a15aa47506744129cd782538c787785 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7a15aa47506744129cd782538c787785 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:04.340 06:08:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:04.340 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:04.598 [2024-12-15 06:08:24.524973] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:04.598 request: 00:18:04.598 { 00:18:04.598 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.598 "nsid": 2, 00:18:04.598 "host": "nqn.2016-06.io.spdk:host1", 00:18:04.598 "method": "nvmf_ns_remove_host", 00:18:04.598 "req_id": 1 00:18:04.598 } 00:18:04.598 Got JSON-RPC error response 00:18:04.598 response: 00:18:04.598 { 00:18:04.598 "code": -32602, 00:18:04.598 "message": "Invalid parameters" 00:18:04.598 } 00:18:04.598 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:04.598 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:04.598 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:04.598 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:04.598 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:04.598 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:04.598 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:04.598 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:04.598 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:04.598 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:04.598 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:04.598 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:04.598 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:04.598 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:04.598 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:04.599 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:04.599 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:04.599 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:04.599 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:04.599 06:08:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:04.599 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:04.599 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:04.599 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:04.599 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:04.599 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:04.599 [ 0]:0x2 00:18:04.599 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:04.599 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:04.599 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7a15aa47506744129cd782538c787785 00:18:04.599 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7a15aa47506744129cd782538c787785 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:04.599 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:04.599 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:04.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:04.858 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=958363 00:18:04.858 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:04.858 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 958363 
/var/tmp/host.sock 00:18:04.858 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:04.858 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 958363 ']' 00:18:04.858 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:04.858 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.858 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:04.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:04.858 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.858 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:04.858 [2024-12-15 06:08:24.880268] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:18:04.858 [2024-12-15 06:08:24.880312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid958363 ] 00:18:04.858 [2024-12-15 06:08:24.957073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.858 [2024-12-15 06:08:24.980244] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.117 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:05.117 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:05.117 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:05.374 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:05.631 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 034931bc-9b16-4982-ad31-098f1831eba3 00:18:05.631 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:05.631 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 034931BC9B164982AD31098F1831EBA3 -i 00:18:05.889 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid a07de5e3-84ee-4cbb-91d6-0ba92b12ce12 00:18:05.889 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:05.890 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g A07DE5E384EE4CBB91D60BA92B12CE12 -i 00:18:05.890 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:06.148 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:06.406 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:06.406 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:06.746 nvme0n1 00:18:06.746 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:06.747 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:07.065 nvme1n2 00:18:07.065 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:07.065 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:07.065 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:07.065 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:07.065 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:07.065 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:07.065 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:07.065 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:07.065 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:07.324 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 034931bc-9b16-4982-ad31-098f1831eba3 == \0\3\4\9\3\1\b\c\-\9\b\1\6\-\4\9\8\2\-\a\d\3\1\-\0\9\8\f\1\8\3\1\e\b\a\3 ]] 00:18:07.324 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:07.324 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:07.324 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:07.583 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ a07de5e3-84ee-4cbb-91d6-0ba92b12ce12 == \a\0\7\d\e\5\e\3\-\8\4\e\e\-\4\c\b\b\-\9\1\d\6\-\0\b\a\9\2\b\1\2\c\e\1\2 ]] 00:18:07.583 06:08:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:07.842 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:07.842 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 034931bc-9b16-4982-ad31-098f1831eba3 00:18:07.842 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:07.842 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 034931BC9B164982AD31098F1831EBA3 00:18:07.842 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:07.842 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 034931BC9B164982AD31098F1831EBA3 00:18:07.842 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:07.842 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:07.842 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:07.842 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:07.842 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:07.842 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:07.842 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:07.842 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:07.842 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 034931BC9B164982AD31098F1831EBA3 00:18:08.101 [2024-12-15 06:08:28.118989] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:18:08.101 [2024-12-15 06:08:28.119020] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:18:08.101 [2024-12-15 06:08:28.119029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.101 request: 00:18:08.101 { 00:18:08.101 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.101 "namespace": { 00:18:08.101 "bdev_name": "invalid", 00:18:08.101 "nsid": 1, 00:18:08.101 "nguid": "034931BC9B164982AD31098F1831EBA3", 00:18:08.101 "no_auto_visible": false, 00:18:08.101 "hide_metadata": false 00:18:08.101 }, 00:18:08.101 "method": "nvmf_subsystem_add_ns", 00:18:08.101 "req_id": 1 00:18:08.101 } 00:18:08.101 Got JSON-RPC error response 00:18:08.101 response: 00:18:08.101 { 00:18:08.101 "code": -32602, 00:18:08.101 "message": "Invalid parameters" 00:18:08.101 } 00:18:08.101 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:08.101 06:08:28 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:08.101 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:08.101 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:08.101 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 034931bc-9b16-4982-ad31-098f1831eba3 00:18:08.101 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:08.101 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 034931BC9B164982AD31098F1831EBA3 -i 00:18:08.360 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:18:10.263 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:18:10.263 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:18:10.263 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:10.523 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:18:10.523 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 958363 00:18:10.523 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 958363 ']' 00:18:10.523 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 958363 00:18:10.523 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:10.523 06:08:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.523 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 958363 00:18:10.523 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:10.523 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:10.523 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 958363' 00:18:10.523 killing process with pid 958363 00:18:10.523 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 958363 00:18:10.523 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 958363 00:18:10.781 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:11.040 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:11.040 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:18:11.040 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:11.040 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:11.040 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:11.040 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:11.040 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:11.040 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:18:11.040 rmmod nvme_tcp 00:18:11.040 rmmod nvme_fabrics 00:18:11.040 rmmod nvme_keyring 00:18:11.040 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:11.040 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:11.040 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:11.040 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 956503 ']' 00:18:11.040 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 956503 00:18:11.040 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 956503 ']' 00:18:11.040 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 956503 00:18:11.040 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:11.040 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.040 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 956503 00:18:11.299 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:11.299 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:11.299 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 956503' 00:18:11.299 killing process with pid 956503 00:18:11.299 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 956503 00:18:11.299 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 956503 00:18:11.299 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 
-- # '[' '' == iso ']' 00:18:11.299 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:11.299 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:11.299 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:11.299 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:18:11.299 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:11.299 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:18:11.299 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:11.299 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:11.299 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.299 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:11.299 06:08:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:13.836 00:18:13.836 real 0m25.450s 00:18:13.836 user 0m30.438s 00:18:13.836 sys 0m6.839s 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:13.836 ************************************ 00:18:13.836 END TEST nvmf_ns_masking 00:18:13.836 ************************************ 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:13.836 
06:08:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:13.836 ************************************ 00:18:13.836 START TEST nvmf_nvme_cli 00:18:13.836 ************************************ 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:13.836 * Looking for test storage... 00:18:13.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:13.836 
06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:13.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.836 --rc genhtml_branch_coverage=1 00:18:13.836 --rc genhtml_function_coverage=1 00:18:13.836 --rc genhtml_legend=1 00:18:13.836 --rc geninfo_all_blocks=1 00:18:13.836 --rc geninfo_unexecuted_blocks=1 00:18:13.836 
00:18:13.836 ' 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:13.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.836 --rc genhtml_branch_coverage=1 00:18:13.836 --rc genhtml_function_coverage=1 00:18:13.836 --rc genhtml_legend=1 00:18:13.836 --rc geninfo_all_blocks=1 00:18:13.836 --rc geninfo_unexecuted_blocks=1 00:18:13.836 00:18:13.836 ' 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:13.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.836 --rc genhtml_branch_coverage=1 00:18:13.836 --rc genhtml_function_coverage=1 00:18:13.836 --rc genhtml_legend=1 00:18:13.836 --rc geninfo_all_blocks=1 00:18:13.836 --rc geninfo_unexecuted_blocks=1 00:18:13.836 00:18:13.836 ' 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:13.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.836 --rc genhtml_branch_coverage=1 00:18:13.836 --rc genhtml_function_coverage=1 00:18:13.836 --rc genhtml_legend=1 00:18:13.836 --rc geninfo_all_blocks=1 00:18:13.836 --rc geninfo_unexecuted_blocks=1 00:18:13.836 00:18:13.836 ' 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
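The trace above steps through the version comparison in scripts/common.sh: it splits each version string on `.-:` into arrays `ver1`/`ver2`, then compares field by field, treating missing fields as 0. A minimal standalone sketch of that logic (the function name `lt_version` is mine, not the script's real helper name):

```shell
#!/usr/bin/env bash
# Digit-wise "less than" version comparison, mirroring what the trace shows
# scripts/common.sh doing: split on any of . - : and compare each field
# numerically, with absent fields defaulting to 0.
lt_version() {
    local -a ver1 ver2
    local IFS=.-:
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
            return 0
        elif (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
            return 1
        fi
    done
    return 1    # equal is not less-than
}

if lt_version 1.9 2.0; then
    echo "1.9 < 2.0"    # prints, since 1 == 2 fails at the first field
fi
```

In the log this check gates the lcov/LCOV_OPTS setup that immediately follows it.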
00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:13.836 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.837 06:08:33 
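Each `paths/export.sh` line above prepends the same three toolchain directories (`go`, `golangci`, `protoc`) unconditionally, so PATH visibly accumulates duplicate entries every time the file is sourced. A sketch of an idempotent prepend that would avoid the ballooning (the helper name `path_prepend` is illustrative, not part of the SPDK scripts):

```shell
#!/usr/bin/env bash
# Idempotent PATH prepend: only add the directory if it is not already
# present. Wrapping both sides in ':' makes the substring match exact.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;                # already present: do nothing
        *) PATH="$1:$PATH" ;;
    esac
}

PATH=/usr/bin:/bin
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/go/1.21.1/bin     # second call is a no-op
echo "$PATH"                        # /opt/go/1.21.1/bin:/usr/bin:/bin
```

The duplicates in the log are harmless for lookup correctness (the first match wins), but they make the exported PATH hard to read and slightly slow to search.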
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:13.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
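The trace records a real (non-fatal) error at nvmf/common.sh line 33: `'[' '' -eq 1 ']'` fails with `integer expression expected` because the tested variable expanded to an empty string. A hedged sketch of the defensive pattern that avoids this, defaulting the value before the numeric test (the variable name below is illustrative, not the one common.sh actually checks):

```shell
#!/usr/bin/env bash
# Reproduce and then avoid the "[: : integer expression expected" error
# the log shows: an empty variable is not a valid operand for -eq.
SOME_FLAG=""                        # empty, as in the logged run

# Naive form: errors on empty input (stderr suppressed here).
if [ "$SOME_FLAG" -eq 1 ] 2>/dev/null; then
    echo "naive: enabled"
fi

# Defaulted form: empty expands to 0, so the test is always well-formed.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "defensive: enabled"
else
    echo "defensive: disabled"      # printed for the empty value
fi
```

Since the failing test sits in an `if`-style guard, the script continues past the error, which is why the run proceeds normally after the logged message.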
_remove_spdk_ns 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:13.837 06:08:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:19.107 06:08:39 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:19.107 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:19.107 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.107 06:08:39 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:19.107 Found net devices under 0000:af:00.0: cvl_0_0 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:19.107 Found net devices under 0000:af:00.1: cvl_0_1 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:19.107 06:08:39 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:19.107 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:19.367 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:19.367 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:19.367 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:19.367 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:19.367 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:19.367 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:19.367 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:19.367 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:19.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:19.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:18:19.367 00:18:19.367 --- 10.0.0.2 ping statistics --- 00:18:19.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.367 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:18:19.367 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:19.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:19.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:18:19.367 00:18:19.367 --- 10.0.0.1 ping statistics --- 00:18:19.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.367 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:18:19.367 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:19.367 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:18:19.367 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:19.367 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:19.367 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:19.367 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:19.367 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:19.367 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:19.367 06:08:39 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:19.367 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:19.367 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:19.367 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:19.367 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:19.367 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=962862 00:18:19.367 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:19.367 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 962862 00:18:19.367 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 962862 ']' 00:18:19.367 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.367 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.367 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.367 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.367 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:19.626 [2024-12-15 06:08:39.538874] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:18:19.626 [2024-12-15 06:08:39.538915] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.626 [2024-12-15 06:08:39.615521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:19.626 [2024-12-15 06:08:39.640195] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.626 [2024-12-15 06:08:39.640233] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.626 [2024-12-15 06:08:39.640240] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.626 [2024-12-15 06:08:39.640247] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.626 [2024-12-15 06:08:39.640252] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:19.626 [2024-12-15 06:08:39.641734] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.626 [2024-12-15 06:08:39.641842] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.626 [2024-12-15 06:08:39.641858] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:19.626 [2024-12-15 06:08:39.641863] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.626 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.626 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:18:19.626 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:19.626 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:19.626 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:19.886 [2024-12-15 06:08:39.786287] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:19.886 Malloc0 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:19.886 Malloc1 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:19.886 [2024-12-15 06:08:39.887115] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.886 06:08:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:18:20.145 00:18:20.145 Discovery Log Number of Records 2, Generation counter 2 00:18:20.145 =====Discovery Log Entry 0====== 00:18:20.145 trtype: tcp 00:18:20.145 adrfam: ipv4 00:18:20.145 subtype: current discovery subsystem 00:18:20.145 treq: not required 00:18:20.145 portid: 0 00:18:20.145 trsvcid: 4420 
00:18:20.145 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:20.145 traddr: 10.0.0.2 00:18:20.145 eflags: explicit discovery connections, duplicate discovery information 00:18:20.145 sectype: none 00:18:20.146 =====Discovery Log Entry 1====== 00:18:20.146 trtype: tcp 00:18:20.146 adrfam: ipv4 00:18:20.146 subtype: nvme subsystem 00:18:20.146 treq: not required 00:18:20.146 portid: 0 00:18:20.146 trsvcid: 4420 00:18:20.146 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:20.146 traddr: 10.0.0.2 00:18:20.146 eflags: none 00:18:20.146 sectype: none 00:18:20.146 06:08:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:20.146 06:08:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:20.146 06:08:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:20.146 06:08:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:20.146 06:08:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:20.146 06:08:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:20.146 06:08:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:20.146 06:08:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:20.146 06:08:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:20.146 06:08:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:20.146 06:08:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:21.103 06:08:41 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:21.103 06:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:18:21.103 06:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:21.103 06:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:21.103 06:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:21.103 06:08:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:23.639 
06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:23.639 /dev/nvme0n2 ]] 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:23.639 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:23.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:23.899 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:23.899 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:18:23.899 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:23.899 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:23.899 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:23.899 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:23.899 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:18:23.899 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:23.899 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:23.899 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.899 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:23.899 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.899 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:23.899 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:23.899 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:23.899 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:23.899 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:23.899 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:23.899 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:23.899 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:23.899 rmmod nvme_tcp 00:18:23.899 rmmod nvme_fabrics 00:18:23.899 rmmod nvme_keyring 00:18:23.899 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:23.899 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:23.899 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:23.899 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 962862 ']' 
00:18:23.899 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 962862 00:18:23.899 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 962862 ']' 00:18:23.900 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 962862 00:18:23.900 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:18:23.900 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:23.900 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 962862 00:18:23.900 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:23.900 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:23.900 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 962862' 00:18:23.900 killing process with pid 962862 00:18:23.900 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 962862 00:18:23.900 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 962862 00:18:24.159 06:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:24.159 06:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:24.159 06:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:24.159 06:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:24.159 06:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:18:24.159 06:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 
00:18:24.159 06:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:24.159 06:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:24.159 06:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:24.159 06:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.159 06:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:24.159 06:08:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:26.694 00:18:26.694 real 0m12.681s 00:18:26.694 user 0m19.562s 00:18:26.694 sys 0m4.957s 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:26.694 ************************************ 00:18:26.694 END TEST nvmf_nvme_cli 00:18:26.694 ************************************ 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:26.694 ************************************ 00:18:26.694 START TEST 
nvmf_vfio_user 00:18:26.694 ************************************ 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:26.694 * Looking for test storage... 00:18:26.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:26.694 06:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:26.694 06:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:26.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.694 --rc genhtml_branch_coverage=1 00:18:26.694 --rc genhtml_function_coverage=1 00:18:26.694 --rc genhtml_legend=1 00:18:26.694 --rc geninfo_all_blocks=1 00:18:26.694 --rc geninfo_unexecuted_blocks=1 00:18:26.694 00:18:26.694 ' 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:26.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.694 --rc genhtml_branch_coverage=1 00:18:26.694 --rc genhtml_function_coverage=1 00:18:26.694 --rc genhtml_legend=1 00:18:26.694 --rc geninfo_all_blocks=1 00:18:26.694 --rc geninfo_unexecuted_blocks=1 00:18:26.694 00:18:26.694 ' 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:26.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.694 --rc genhtml_branch_coverage=1 00:18:26.694 --rc genhtml_function_coverage=1 00:18:26.694 --rc genhtml_legend=1 00:18:26.694 --rc geninfo_all_blocks=1 00:18:26.694 --rc geninfo_unexecuted_blocks=1 00:18:26.694 00:18:26.694 ' 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:26.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.694 --rc genhtml_branch_coverage=1 00:18:26.694 --rc genhtml_function_coverage=1 00:18:26.694 --rc genhtml_legend=1 00:18:26.694 --rc geninfo_all_blocks=1 00:18:26.694 --rc geninfo_unexecuted_blocks=1 00:18:26.694 00:18:26.694 ' 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:26.694 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:26.695 
06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:26.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:26.695 06:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=964123 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 964123' 00:18:26.695 Process pid: 964123 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 964123 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 
964123 ']' 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:26.695 [2024-12-15 06:08:46.580299] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:18:26.695 [2024-12-15 06:08:46.580350] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.695 [2024-12-15 06:08:46.653197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:26.695 [2024-12-15 06:08:46.675363] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:26.695 [2024-12-15 06:08:46.675404] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:26.695 [2024-12-15 06:08:46.675413] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:26.695 [2024-12-15 06:08:46.675419] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:26.695 [2024-12-15 06:08:46.675423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:26.695 [2024-12-15 06:08:46.676759] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.695 [2024-12-15 06:08:46.676793] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:26.695 [2024-12-15 06:08:46.676835] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.695 [2024-12-15 06:08:46.676836] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:26.695 06:08:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:28.075 06:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:28.075 06:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:28.075 06:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:28.075 06:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:28.075 06:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:28.075 06:08:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:28.334 Malloc1 00:18:28.334 06:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:28.334 06:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:28.593 06:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:28.853 06:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:28.853 06:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:28.853 06:08:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:29.112 Malloc2 00:18:29.112 06:08:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:29.371 06:08:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:29.371 06:08:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:29.632 06:08:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:29.632 06:08:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:29.632 06:08:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:18:29.632 06:08:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:29.632 06:08:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:29.632 06:08:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:29.633 [2024-12-15 06:08:49.651173] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:18:29.633 [2024-12-15 06:08:49.651221] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid964592 ] 00:18:29.633 [2024-12-15 06:08:49.690437] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:29.633 [2024-12-15 06:08:49.695744] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:29.633 [2024-12-15 06:08:49.695761] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f5d13db2000 00:18:29.633 [2024-12-15 06:08:49.696745] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:29.633 [2024-12-15 06:08:49.697746] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:29.633 [2024-12-15 06:08:49.698752] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:29.633 [2024-12-15 06:08:49.699756] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:29.633 [2024-12-15 06:08:49.700766] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:29.633 [2024-12-15 06:08:49.701770] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:29.633 [2024-12-15 06:08:49.702780] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:29.633 [2024-12-15 06:08:49.703780] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:29.633 [2024-12-15 06:08:49.704791] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:29.633 [2024-12-15 06:08:49.704802] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f5d12abc000 00:18:29.633 [2024-12-15 06:08:49.705717] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:29.633 [2024-12-15 06:08:49.719258] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:29.633 [2024-12-15 06:08:49.719282] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:18:29.633 [2024-12-15 06:08:49.721903] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:29.633 [2024-12-15 06:08:49.721937] 
nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:29.633 [2024-12-15 06:08:49.722017] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:18:29.633 [2024-12-15 06:08:49.722034] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:18:29.633 [2024-12-15 06:08:49.722040] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:18:29.633 [2024-12-15 06:08:49.722895] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:29.633 [2024-12-15 06:08:49.722904] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:18:29.633 [2024-12-15 06:08:49.722911] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:18:29.633 [2024-12-15 06:08:49.723901] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:29.633 [2024-12-15 06:08:49.723910] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:18:29.633 [2024-12-15 06:08:49.723916] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:29.633 [2024-12-15 06:08:49.724904] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:29.633 [2024-12-15 06:08:49.724913] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:29.633 [2024-12-15 06:08:49.725914] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:18:29.633 [2024-12-15 06:08:49.725922] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:29.633 [2024-12-15 06:08:49.725926] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:29.633 [2024-12-15 06:08:49.725933] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:29.633 [2024-12-15 06:08:49.726040] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:18:29.633 [2024-12-15 06:08:49.726045] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:29.633 [2024-12-15 06:08:49.726053] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:29.633 [2024-12-15 06:08:49.726917] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:29.633 [2024-12-15 06:08:49.727920] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:29.633 [2024-12-15 06:08:49.728926] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:29.633 [2024-12-15 06:08:49.729926] 
vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:29.633 [2024-12-15 06:08:49.730017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:29.633 [2024-12-15 06:08:49.730935] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:29.633 [2024-12-15 06:08:49.730943] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:29.633 [2024-12-15 06:08:49.730947] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:29.633 [2024-12-15 06:08:49.730964] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:18:29.633 [2024-12-15 06:08:49.730971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:29.633 [2024-12-15 06:08:49.730984] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:29.633 [2024-12-15 06:08:49.730989] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:29.633 [2024-12-15 06:08:49.730995] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:29.633 [2024-12-15 06:08:49.731009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:29.633 [2024-12-15 06:08:49.731060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:18:29.633 [2024-12-15 06:08:49.731069] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:18:29.633 [2024-12-15 06:08:49.731073] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:18:29.633 [2024-12-15 06:08:49.731077] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:18:29.633 [2024-12-15 06:08:49.731081] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:29.633 [2024-12-15 06:08:49.731086] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:18:29.633 [2024-12-15 06:08:49.731090] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:18:29.633 [2024-12-15 06:08:49.731094] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:18:29.633 [2024-12-15 06:08:49.731103] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:29.633 [2024-12-15 06:08:49.731114] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:29.633 [2024-12-15 06:08:49.731129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:29.633 [2024-12-15 06:08:49.731139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.633 [2024-12-15 06:08:49.731146] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.633 [2024-12-15 06:08:49.731154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.633 [2024-12-15 06:08:49.731161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.633 [2024-12-15 06:08:49.731166] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:29.633 [2024-12-15 06:08:49.731174] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:29.633 [2024-12-15 06:08:49.731182] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:29.633 [2024-12-15 06:08:49.731192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:29.633 [2024-12-15 06:08:49.731198] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:18:29.634 [2024-12-15 06:08:49.731204] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:29.634 [2024-12-15 06:08:49.731211] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:18:29.634 [2024-12-15 06:08:49.731216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:18:29.634 [2024-12-15 06:08:49.731224] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:29.634 [2024-12-15 06:08:49.731236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:29.634 [2024-12-15 06:08:49.731284] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:18:29.634 [2024-12-15 06:08:49.731293] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:29.634 [2024-12-15 06:08:49.731300] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:29.634 [2024-12-15 06:08:49.731305] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:29.634 [2024-12-15 06:08:49.731310] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:29.634 [2024-12-15 06:08:49.731315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:29.634 [2024-12-15 06:08:49.731329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:29.634 [2024-12-15 06:08:49.731338] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:18:29.634 [2024-12-15 06:08:49.731349] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:18:29.634 [2024-12-15 06:08:49.731356] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:18:29.634 [2024-12-15 06:08:49.731363] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:29.634 [2024-12-15 06:08:49.731369] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:29.634 [2024-12-15 06:08:49.731374] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:29.634 [2024-12-15 06:08:49.731379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:29.634 [2024-12-15 06:08:49.731402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:29.634 [2024-12-15 06:08:49.731414] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:29.634 [2024-12-15 06:08:49.731422] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:29.634 [2024-12-15 06:08:49.731428] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:29.634 [2024-12-15 06:08:49.731433] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:29.634 [2024-12-15 06:08:49.731437] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:29.634 [2024-12-15 06:08:49.731443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:29.634 [2024-12-15 06:08:49.731453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:18:29.634 [2024-12-15 06:08:49.731460] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:29.634 [2024-12-15 06:08:49.731466] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:29.634 [2024-12-15 06:08:49.731476] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:18:29.634 [2024-12-15 06:08:49.731483] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:29.634 [2024-12-15 06:08:49.731487] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:29.634 [2024-12-15 06:08:49.731492] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:18:29.634 [2024-12-15 06:08:49.731497] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:29.634 [2024-12-15 06:08:49.731501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:18:29.634 [2024-12-15 06:08:49.731506] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:18:29.634 [2024-12-15 06:08:49.731527] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:29.634 [2024-12-15 06:08:49.731536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:29.634 [2024-12-15 06:08:49.731546] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:29.634 [2024-12-15 06:08:49.731558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:29.634 [2024-12-15 06:08:49.731569] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:29.634 [2024-12-15 06:08:49.731577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:29.634 [2024-12-15 06:08:49.731587] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:29.634 [2024-12-15 06:08:49.731601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:29.634 [2024-12-15 06:08:49.731613] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:29.634 [2024-12-15 06:08:49.731618] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:29.634 [2024-12-15 06:08:49.731621] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:29.634 [2024-12-15 06:08:49.731625] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:29.634 [2024-12-15 06:08:49.731630] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:29.634 [2024-12-15 06:08:49.731637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:29.634 [2024-12-15 06:08:49.731644] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:29.634 [2024-12-15 06:08:49.731648] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:29.634 [2024-12-15 06:08:49.731651] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:29.634 [2024-12-15 06:08:49.731657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:29.634 [2024-12-15 06:08:49.731663] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:29.634 [2024-12-15 06:08:49.731667] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:29.634 [2024-12-15 06:08:49.731670] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:29.634 [2024-12-15 06:08:49.731675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:29.634 [2024-12-15 06:08:49.731682] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:29.634 [2024-12-15 06:08:49.731687] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:29.634 [2024-12-15 06:08:49.731690] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:29.634 [2024-12-15 06:08:49.731695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:29.634 [2024-12-15 06:08:49.731703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:29.634 [2024-12-15 
06:08:49.731713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:29.634 [2024-12-15 06:08:49.731724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:29.634 [2024-12-15 06:08:49.731731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:29.634 ===================================================== 00:18:29.634 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:29.634 ===================================================== 00:18:29.634 Controller Capabilities/Features 00:18:29.634 ================================ 00:18:29.634 Vendor ID: 4e58 00:18:29.634 Subsystem Vendor ID: 4e58 00:18:29.634 Serial Number: SPDK1 00:18:29.634 Model Number: SPDK bdev Controller 00:18:29.634 Firmware Version: 25.01 00:18:29.634 Recommended Arb Burst: 6 00:18:29.634 IEEE OUI Identifier: 8d 6b 50 00:18:29.634 Multi-path I/O 00:18:29.634 May have multiple subsystem ports: Yes 00:18:29.634 May have multiple controllers: Yes 00:18:29.634 Associated with SR-IOV VF: No 00:18:29.634 Max Data Transfer Size: 131072 00:18:29.634 Max Number of Namespaces: 32 00:18:29.634 Max Number of I/O Queues: 127 00:18:29.634 NVMe Specification Version (VS): 1.3 00:18:29.634 NVMe Specification Version (Identify): 1.3 00:18:29.634 Maximum Queue Entries: 256 00:18:29.634 Contiguous Queues Required: Yes 00:18:29.634 Arbitration Mechanisms Supported 00:18:29.635 Weighted Round Robin: Not Supported 00:18:29.635 Vendor Specific: Not Supported 00:18:29.635 Reset Timeout: 15000 ms 00:18:29.635 Doorbell Stride: 4 bytes 00:18:29.635 NVM Subsystem Reset: Not Supported 00:18:29.635 Command Sets Supported 00:18:29.635 NVM Command Set: Supported 00:18:29.635 Boot Partition: Not Supported 00:18:29.635 Memory Page Size Minimum: 4096 bytes 00:18:29.635 
Memory Page Size Maximum: 4096 bytes 00:18:29.635 Persistent Memory Region: Not Supported 00:18:29.635 Optional Asynchronous Events Supported 00:18:29.635 Namespace Attribute Notices: Supported 00:18:29.635 Firmware Activation Notices: Not Supported 00:18:29.635 ANA Change Notices: Not Supported 00:18:29.635 PLE Aggregate Log Change Notices: Not Supported 00:18:29.635 LBA Status Info Alert Notices: Not Supported 00:18:29.635 EGE Aggregate Log Change Notices: Not Supported 00:18:29.635 Normal NVM Subsystem Shutdown event: Not Supported 00:18:29.635 Zone Descriptor Change Notices: Not Supported 00:18:29.635 Discovery Log Change Notices: Not Supported 00:18:29.635 Controller Attributes 00:18:29.635 128-bit Host Identifier: Supported 00:18:29.635 Non-Operational Permissive Mode: Not Supported 00:18:29.635 NVM Sets: Not Supported 00:18:29.635 Read Recovery Levels: Not Supported 00:18:29.635 Endurance Groups: Not Supported 00:18:29.635 Predictable Latency Mode: Not Supported 00:18:29.635 Traffic Based Keep ALive: Not Supported 00:18:29.635 Namespace Granularity: Not Supported 00:18:29.635 SQ Associations: Not Supported 00:18:29.635 UUID List: Not Supported 00:18:29.635 Multi-Domain Subsystem: Not Supported 00:18:29.635 Fixed Capacity Management: Not Supported 00:18:29.635 Variable Capacity Management: Not Supported 00:18:29.635 Delete Endurance Group: Not Supported 00:18:29.635 Delete NVM Set: Not Supported 00:18:29.635 Extended LBA Formats Supported: Not Supported 00:18:29.635 Flexible Data Placement Supported: Not Supported 00:18:29.635 00:18:29.635 Controller Memory Buffer Support 00:18:29.635 ================================ 00:18:29.635 Supported: No 00:18:29.635 00:18:29.635 Persistent Memory Region Support 00:18:29.635 ================================ 00:18:29.635 Supported: No 00:18:29.635 00:18:29.635 Admin Command Set Attributes 00:18:29.635 ============================ 00:18:29.635 Security Send/Receive: Not Supported 00:18:29.635 Format NVM: Not Supported 
00:18:29.635 Firmware Activate/Download: Not Supported 00:18:29.635 Namespace Management: Not Supported 00:18:29.635 Device Self-Test: Not Supported 00:18:29.635 Directives: Not Supported 00:18:29.635 NVMe-MI: Not Supported 00:18:29.635 Virtualization Management: Not Supported 00:18:29.635 Doorbell Buffer Config: Not Supported 00:18:29.635 Get LBA Status Capability: Not Supported 00:18:29.635 Command & Feature Lockdown Capability: Not Supported 00:18:29.635 Abort Command Limit: 4 00:18:29.635 Async Event Request Limit: 4 00:18:29.635 Number of Firmware Slots: N/A 00:18:29.635 Firmware Slot 1 Read-Only: N/A 00:18:29.635 Firmware Activation Without Reset: N/A 00:18:29.635 Multiple Update Detection Support: N/A 00:18:29.635 Firmware Update Granularity: No Information Provided 00:18:29.635 Per-Namespace SMART Log: No 00:18:29.635 Asymmetric Namespace Access Log Page: Not Supported 00:18:29.635 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:29.635 Command Effects Log Page: Supported 00:18:29.635 Get Log Page Extended Data: Supported 00:18:29.635 Telemetry Log Pages: Not Supported 00:18:29.635 Persistent Event Log Pages: Not Supported 00:18:29.635 Supported Log Pages Log Page: May Support 00:18:29.635 Commands Supported & Effects Log Page: Not Supported 00:18:29.635 Feature Identifiers & Effects Log Page:May Support 00:18:29.635 NVMe-MI Commands & Effects Log Page: May Support 00:18:29.635 Data Area 4 for Telemetry Log: Not Supported 00:18:29.635 Error Log Page Entries Supported: 128 00:18:29.635 Keep Alive: Supported 00:18:29.635 Keep Alive Granularity: 10000 ms 00:18:29.635 00:18:29.635 NVM Command Set Attributes 00:18:29.635 ========================== 00:18:29.635 Submission Queue Entry Size 00:18:29.635 Max: 64 00:18:29.635 Min: 64 00:18:29.635 Completion Queue Entry Size 00:18:29.635 Max: 16 00:18:29.635 Min: 16 00:18:29.635 Number of Namespaces: 32 00:18:29.635 Compare Command: Supported 00:18:29.635 Write Uncorrectable Command: Not Supported 00:18:29.635 Dataset 
Management Command: Supported 00:18:29.635 Write Zeroes Command: Supported 00:18:29.635 Set Features Save Field: Not Supported 00:18:29.635 Reservations: Not Supported 00:18:29.635 Timestamp: Not Supported 00:18:29.635 Copy: Supported 00:18:29.635 Volatile Write Cache: Present 00:18:29.635 Atomic Write Unit (Normal): 1 00:18:29.635 Atomic Write Unit (PFail): 1 00:18:29.635 Atomic Compare & Write Unit: 1 00:18:29.635 Fused Compare & Write: Supported 00:18:29.635 Scatter-Gather List 00:18:29.635 SGL Command Set: Supported (Dword aligned) 00:18:29.635 SGL Keyed: Not Supported 00:18:29.635 SGL Bit Bucket Descriptor: Not Supported 00:18:29.635 SGL Metadata Pointer: Not Supported 00:18:29.635 Oversized SGL: Not Supported 00:18:29.635 SGL Metadata Address: Not Supported 00:18:29.635 SGL Offset: Not Supported 00:18:29.635 Transport SGL Data Block: Not Supported 00:18:29.635 Replay Protected Memory Block: Not Supported 00:18:29.635 00:18:29.635 Firmware Slot Information 00:18:29.635 ========================= 00:18:29.635 Active slot: 1 00:18:29.635 Slot 1 Firmware Revision: 25.01 00:18:29.635 00:18:29.635 00:18:29.635 Commands Supported and Effects 00:18:29.635 ============================== 00:18:29.635 Admin Commands 00:18:29.635 -------------- 00:18:29.635 Get Log Page (02h): Supported 00:18:29.635 Identify (06h): Supported 00:18:29.635 Abort (08h): Supported 00:18:29.635 Set Features (09h): Supported 00:18:29.635 Get Features (0Ah): Supported 00:18:29.635 Asynchronous Event Request (0Ch): Supported 00:18:29.635 Keep Alive (18h): Supported 00:18:29.635 I/O Commands 00:18:29.635 ------------ 00:18:29.635 Flush (00h): Supported LBA-Change 00:18:29.635 Write (01h): Supported LBA-Change 00:18:29.635 Read (02h): Supported 00:18:29.635 Compare (05h): Supported 00:18:29.635 Write Zeroes (08h): Supported LBA-Change 00:18:29.635 Dataset Management (09h): Supported LBA-Change 00:18:29.635 Copy (19h): Supported LBA-Change 00:18:29.635 00:18:29.635 Error Log 00:18:29.635 ========= 
00:18:29.635 00:18:29.635 Arbitration 00:18:29.635 =========== 00:18:29.635 Arbitration Burst: 1 00:18:29.635 00:18:29.635 Power Management 00:18:29.635 ================ 00:18:29.635 Number of Power States: 1 00:18:29.635 Current Power State: Power State #0 00:18:29.635 Power State #0: 00:18:29.635 Max Power: 0.00 W 00:18:29.635 Non-Operational State: Operational 00:18:29.635 Entry Latency: Not Reported 00:18:29.635 Exit Latency: Not Reported 00:18:29.635 Relative Read Throughput: 0 00:18:29.635 Relative Read Latency: 0 00:18:29.635 Relative Write Throughput: 0 00:18:29.635 Relative Write Latency: 0 00:18:29.635 Idle Power: Not Reported 00:18:29.635 Active Power: Not Reported 00:18:29.635 Non-Operational Permissive Mode: Not Supported 00:18:29.635 00:18:29.635 Health Information 00:18:29.635 ================== 00:18:29.635 Critical Warnings: 00:18:29.635 Available Spare Space: OK 00:18:29.635 Temperature: OK 00:18:29.635 Device Reliability: OK 00:18:29.635 Read Only: No 00:18:29.635 Volatile Memory Backup: OK 00:18:29.635 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:29.635 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:29.635 Available Spare: 0% 00:18:29.636 Available Spare Threshold: 0% 00:18:29.894 Life Percentage Used: 0% 00:18:29.894 Data Units Read: 0 00:18:29.894 Data Units Written: 0 00:18:29.894 Host Read Commands: 0 00:18:29.894 Host Write Commands: 0 00:18:29.894 Controller Busy Time: 0 minutes 00:18:29.894 Power Cycles: 0 00:18:29.894 Power On Hours: 0 hours 00:18:29.894 Unsafe Shutdowns: 0 00:18:29.894 Unrecoverable Media Errors: 0 00:18:29.894 Lifetime Error Log Entries: 0 00:18:29.894 Warning Temperature Time: 0 minutes 00:18:29.894 Critical Temperature Time: 0 minutes 00:18:29.894 00:18:29.894 Number of Queues 00:18:29.894 ================ 00:18:29.894 Number of I/O Submission Queues: 127 00:18:29.894 Number of I/O Completion Queues: 127 00:18:29.894 00:18:29.894 Active Namespaces 00:18:29.894 ================= 00:18:29.894 Namespace ID:1 00:18:29.894 Error Recovery Timeout: Unlimited 00:18:29.894 Command Set Identifier: NVM (00h) 00:18:29.894 Deallocate: Supported 00:18:29.894 Deallocated/Unwritten Error: Not Supported 00:18:29.894 Deallocated Read Value: Unknown 00:18:29.894 Deallocate in Write Zeroes: Not Supported 00:18:29.894 Deallocated Guard Field: 0xFFFF 00:18:29.894 Flush: Supported 00:18:29.894 Reservation: Supported 00:18:29.894 Namespace Sharing Capabilities: Multiple Controllers 00:18:29.894 Size (in LBAs): 131072 (0GiB) 00:18:29.894 Capacity (in LBAs): 131072 (0GiB) 00:18:29.894 Utilization (in LBAs): 131072 (0GiB) 00:18:29.894 NGUID: 2BDAF1B0F0004B7E98E075429349F85F 00:18:29.894 UUID: 2bdaf1b0-f000-4b7e-98e0-75429349f85f 00:18:29.894 Thin Provisioning: Not Supported 00:18:29.894 Per-NS Atomic Units: Yes 00:18:29.894 Atomic Boundary Size (Normal): 0 00:18:29.894 Atomic Boundary Size (PFail): 0 00:18:29.894 Atomic Boundary Offset: 0 00:18:29.894 Maximum Single Source Range Length: 65535 00:18:29.894 Maximum Copy Length: 65535 00:18:29.894 Maximum Source Range Count: 1 00:18:29.894 NGUID/EUI64 Never Reused: No 00:18:29.894 Namespace Write Protected: No 00:18:29.894 Number of LBA Formats: 1 00:18:29.894 Current LBA Format: LBA Format #00 00:18:29.894 LBA Format #00: Data Size: 512 Metadata Size: 0
00:18:29.636 [2024-12-15 06:08:49.731820] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:29.636 [2024-12-15 06:08:49.731831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:29.636 [2024-12-15 06:08:49.731855] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:18:29.636 [2024-12-15 06:08:49.731866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.636 [2024-12-15 06:08:49.731872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.636 [2024-12-15 06:08:49.731878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.636 [2024-12-15 06:08:49.731883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.636 [2024-12-15 06:08:49.734000] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:29.636 [2024-12-15 06:08:49.734011] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:29.636 [2024-12-15 06:08:49.734959] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:29.636 [2024-12-15 06:08:49.735012] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:18:29.636 [2024-12-15 06:08:49.735018] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:18:29.636 [2024-12-15 06:08:49.735965] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:29.636 [2024-12-15 06:08:49.735975] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:18:29.636 [2024-12-15 06:08:49.736031] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:29.636 [2024-12-15 06:08:49.736990] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:18:29.894 00:18:29.894 06:08:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:29.894 [2024-12-15 06:08:49.967962] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:35.172 Initializing NVMe Controllers 00:18:35.172 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:35.172 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:35.172 Initialization complete. Launching workers. 00:18:35.172 ======================================================== 00:18:35.172 Latency(us) 00:18:35.172 Device Information : IOPS MiB/s Average min max 00:18:35.172 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39911.20 155.90 3207.18 977.77 7606.10 00:18:35.172 ======================================================== 00:18:35.172 Total : 39911.20 155.90 3207.18 977.77 7606.10 00:18:35.172 00:18:35.172 [2024-12-15 06:08:54.992201] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:35.172 06:08:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:35.172 [2024-12-15 06:08:55.222289] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:40.447 Initializing NVMe Controllers 00:18:40.447 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:18:40.447 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:40.447 Initialization complete. Launching workers. 00:18:40.447 ======================================================== 00:18:40.447 Latency(us) 00:18:40.447 Device Information : IOPS MiB/s Average min max 00:18:40.447 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16063.71 62.75 7973.66 5982.29 8995.67 00:18:40.447 ======================================================== 00:18:40.447 Total : 16063.71 62.75 7973.66 5982.29 8995.67 00:18:40.447 00:18:40.447 [2024-12-15 06:09:00.260943] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:40.447 06:09:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:40.447 [2024-12-15 06:09:00.470930] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:45.721 [2024-12-15 06:09:05.526226] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:45.721 Initializing NVMe Controllers 00:18:45.721 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:45.721 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:45.721 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:45.721 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:45.721 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:45.721 Initialization complete. Launching workers. 
00:18:45.721 Starting thread on core 2 00:18:45.721 Starting thread on core 3 00:18:45.721 Starting thread on core 1 00:18:45.721 06:09:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:45.722 [2024-12-15 06:09:05.829389] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:49.013 [2024-12-15 06:09:08.899769] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:49.013 Initializing NVMe Controllers 00:18:49.013 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:49.013 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:49.013 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:49.013 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:49.013 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:49.013 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:49.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:49.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:49.013 Initialization complete. Launching workers. 
00:18:49.013 Starting thread on core 1 with urgent priority queue 00:18:49.013 Starting thread on core 2 with urgent priority queue 00:18:49.013 Starting thread on core 3 with urgent priority queue 00:18:49.013 Starting thread on core 0 with urgent priority queue 00:18:49.013 SPDK bdev Controller (SPDK1 ) core 0: 7052.67 IO/s 14.18 secs/100000 ios 00:18:49.013 SPDK bdev Controller (SPDK1 ) core 1: 7400.00 IO/s 13.51 secs/100000 ios 00:18:49.013 SPDK bdev Controller (SPDK1 ) core 2: 7402.00 IO/s 13.51 secs/100000 ios 00:18:49.013 SPDK bdev Controller (SPDK1 ) core 3: 8752.00 IO/s 11.43 secs/100000 ios 00:18:49.013 ======================================================== 00:18:49.013 00:18:49.013 06:09:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:49.272 [2024-12-15 06:09:09.182045] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:49.272 Initializing NVMe Controllers 00:18:49.272 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:49.272 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:49.272 Namespace ID: 1 size: 0GB 00:18:49.272 Initialization complete. 00:18:49.272 INFO: using host memory buffer for IO 00:18:49.272 Hello world! 
00:18:49.272 [2024-12-15 06:09:09.217246] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:49.273 06:09:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:49.532 [2024-12-15 06:09:09.498375] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:50.470 Initializing NVMe Controllers 00:18:50.470 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:50.470 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:50.470 Initialization complete. Launching workers. 00:18:50.470 submit (in ns) avg, min, max = 6794.4, 3177.1, 4005204.8 00:18:50.470 complete (in ns) avg, min, max = 20297.5, 1774.3, 4001251.4 00:18:50.470 00:18:50.470 Submit histogram 00:18:50.470 ================ 00:18:50.470 Range in us Cumulative Count 00:18:50.470 3.170 - 3.185: 0.0371% ( 6) 00:18:50.470 3.185 - 3.200: 0.5934% ( 90) 00:18:50.470 3.200 - 3.215: 2.8988% ( 373) 00:18:50.470 3.215 - 3.230: 7.3738% ( 724) 00:18:50.470 3.230 - 3.246: 12.2999% ( 797) 00:18:50.470 3.246 - 3.261: 18.6044% ( 1020) 00:18:50.470 3.261 - 3.276: 26.2377% ( 1235) 00:18:50.470 3.276 - 3.291: 32.5545% ( 1022) 00:18:50.470 3.291 - 3.307: 37.6537% ( 825) 00:18:50.470 3.307 - 3.322: 42.9631% ( 859) 00:18:50.470 3.322 - 3.337: 48.0561% ( 824) 00:18:50.470 3.337 - 3.352: 51.6596% ( 583) 00:18:50.470 3.352 - 3.368: 57.3336% ( 918) 00:18:50.470 3.368 - 3.383: 63.6010% ( 1014) 00:18:50.470 3.383 - 3.398: 68.7002% ( 825) 00:18:50.470 3.398 - 3.413: 74.6029% ( 955) 00:18:50.471 3.413 - 3.429: 79.0840% ( 725) 00:18:50.471 3.429 - 3.444: 82.2177% ( 507) 00:18:50.471 3.444 - 3.459: 84.3130% ( 339) 00:18:50.471 3.459 - 3.474: 85.7531% ( 233) 00:18:50.471 3.474 - 3.490: 86.6864% ( 
151) 00:18:50.471 3.490 - 3.505: 87.4714% ( 127) 00:18:50.471 3.505 - 3.520: 88.1698% ( 113) 00:18:50.471 3.520 - 3.535: 88.9301% ( 123) 00:18:50.471 3.535 - 3.550: 89.8510% ( 149) 00:18:50.471 3.550 - 3.566: 90.7905% ( 152) 00:18:50.471 3.566 - 3.581: 91.5940% ( 130) 00:18:50.471 3.581 - 3.596: 92.5583% ( 156) 00:18:50.471 3.596 - 3.611: 93.4359% ( 142) 00:18:50.471 3.611 - 3.627: 94.2765% ( 136) 00:18:50.471 3.627 - 3.642: 95.1419% ( 140) 00:18:50.471 3.642 - 3.657: 95.9639% ( 133) 00:18:50.471 3.657 - 3.672: 96.6871% ( 117) 00:18:50.471 3.672 - 3.688: 97.1754% ( 79) 00:18:50.471 3.688 - 3.703: 97.6636% ( 79) 00:18:50.471 3.703 - 3.718: 98.1148% ( 73) 00:18:50.471 3.718 - 3.733: 98.4239% ( 50) 00:18:50.471 3.733 - 3.749: 98.6464% ( 36) 00:18:50.471 3.749 - 3.764: 98.8751% ( 37) 00:18:50.471 3.764 - 3.779: 99.0481% ( 28) 00:18:50.471 3.779 - 3.794: 99.1532% ( 17) 00:18:50.471 3.794 - 3.810: 99.1903% ( 6) 00:18:50.471 3.810 - 3.825: 99.2521% ( 10) 00:18:50.471 3.825 - 3.840: 99.2768% ( 4) 00:18:50.471 3.840 - 3.855: 99.3510% ( 12) 00:18:50.471 3.855 - 3.870: 99.3757% ( 4) 00:18:50.471 3.870 - 3.886: 99.3943% ( 3) 00:18:50.471 3.886 - 3.901: 99.4128% ( 3) 00:18:50.471 3.901 - 3.931: 99.4561% ( 7) 00:18:50.471 3.931 - 3.962: 99.4932% ( 6) 00:18:50.471 3.962 - 3.992: 99.4994% ( 1) 00:18:50.471 3.992 - 4.023: 99.5550% ( 9) 00:18:50.471 4.023 - 4.053: 99.5797% ( 4) 00:18:50.471 4.053 - 4.084: 99.5859% ( 1) 00:18:50.471 4.084 - 4.114: 99.5921% ( 1) 00:18:50.471 4.114 - 4.145: 99.5982% ( 1) 00:18:50.471 4.145 - 4.175: 99.6044% ( 1) 00:18:50.471 4.206 - 4.236: 99.6106% ( 1) 00:18:50.471 4.236 - 4.267: 99.6230% ( 2) 00:18:50.471 4.297 - 4.328: 99.6353% ( 2) 00:18:50.471 4.724 - 4.754: 99.6477% ( 2) 00:18:50.471 5.150 - 5.181: 99.6601% ( 2) 00:18:50.471 5.211 - 5.242: 99.6662% ( 1) 00:18:50.471 5.242 - 5.272: 99.6724% ( 1) 00:18:50.471 5.333 - 5.364: 99.6786% ( 1) 00:18:50.471 5.547 - 5.577: 99.6848% ( 1) 00:18:50.471 5.577 - 5.608: 99.6910% ( 1) 00:18:50.471 5.638 - 5.669: 
99.6971% ( 1) 00:18:50.471 5.699 - 5.730: 99.7033% ( 1) 00:18:50.471 5.730 - 5.760: 99.7095% ( 1) 00:18:50.471 5.760 - 5.790: 99.7280% ( 3) 00:18:50.471 5.851 - 5.882: 99.7466% ( 3) 00:18:50.471 5.882 - 5.912: 99.7528% ( 1) 00:18:50.471 5.943 - 5.973: 99.7589% ( 1) 00:18:50.471 6.034 - 6.065: 99.7651% ( 1) 00:18:50.471 6.065 - 6.095: 99.7775% ( 2) 00:18:50.471 6.126 - 6.156: 99.7899% ( 2) 00:18:50.471 6.187 - 6.217: 99.7960% ( 1) 00:18:50.471 6.248 - 6.278: 99.8022% ( 1) 00:18:50.471 [2024-12-15 06:09:10.522373] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:50.471 6.278 - 6.309: 99.8084% ( 1) 00:18:50.471 6.339 - 6.370: 99.8269% ( 3) 00:18:50.471 6.370 - 6.400: 99.8393% ( 2) 00:18:50.471 6.400 - 6.430: 99.8455% ( 1) 00:18:50.471 6.583 - 6.613: 99.8517% ( 1) 00:18:50.471 6.735 - 6.766: 99.8578% ( 1) 00:18:50.471 6.949 - 6.979: 99.8640% ( 1) 00:18:50.471 7.284 - 7.314: 99.8702% ( 1) 00:18:50.471 7.314 - 7.345: 99.8764% ( 1) 00:18:50.471 7.345 - 7.375: 99.8826% ( 1) 00:18:50.471 7.863 - 7.924: 99.8887% ( 1) 00:18:50.471 8.107 - 8.168: 99.8949% ( 1) 00:18:50.471 9.813 - 9.874: 99.9011% ( 1) 00:18:50.471 13.958 - 14.019: 99.9073% ( 1) 00:18:50.471 48.762 - 49.006: 99.9135% ( 1) 00:18:50.471 3370.423 - 3386.027: 99.9196% ( 1) 00:18:50.471 3994.575 - 4025.783: 100.0000% ( 13) 00:18:50.471 00:18:50.471 Complete histogram 00:18:50.471 ================== 00:18:50.471 Range in us Cumulative Count 00:18:50.471 1.768 - 1.775: 0.0124% ( 2) 00:18:50.471 1.775 - 1.783: 1.0137% ( 162) 00:18:50.471 1.783 - 1.790: 10.0253% ( 1458) 00:18:50.471 1.790 - 1.798: 32.3320% ( 3609) 00:18:50.471 1.798 - 1.806: 52.6732% ( 3291) 00:18:50.471 1.806 - 1.813: 61.7714% ( 1472) 00:18:50.471 1.813 - 1.821: 65.0967% ( 538) 00:18:50.471 1.821 - 1.829: 68.0079% ( 471) 00:18:50.471 1.829 - 1.836: 74.3186% ( 1021) 00:18:50.471 1.836 - 1.844: 82.5267% ( 1328) 00:18:50.471 1.844 - 1.851: 88.8930% ( 1030) 00:18:50.471 1.851 - 1.859: 92.3234% ( 
555) 00:18:50.471 1.859 - 1.867: 94.1282% ( 292) 00:18:50.471 1.867 - 1.874: 95.2902% ( 188) 00:18:50.471 1.874 - 1.882: 96.0195% ( 118) 00:18:50.471 1.882 - 1.890: 96.5758% ( 90) 00:18:50.471 1.890 - 1.897: 96.7798% ( 33) 00:18:50.471 1.897 - 1.905: 97.2310% ( 73) 00:18:50.471 1.905 - 1.912: 97.7749% ( 88) 00:18:50.471 1.912 - 1.920: 98.2076% ( 70) 00:18:50.471 1.920 - 1.928: 98.4610% ( 41) 00:18:50.471 1.928 - 1.935: 98.6588% ( 32) 00:18:50.471 1.935 - 1.943: 98.7206% ( 10) 00:18:50.471 1.943 - 1.950: 98.7576% ( 6) 00:18:50.471 1.950 - 1.966: 98.8380% ( 13) 00:18:50.471 1.966 - 1.981: 98.8998% ( 10) 00:18:50.471 1.981 - 1.996: 98.9431% ( 7) 00:18:50.471 1.996 - 2.011: 98.9616% ( 3) 00:18:50.471 2.011 - 2.027: 98.9802% ( 3) 00:18:50.471 2.027 - 2.042: 99.0111% ( 5) 00:18:50.471 2.042 - 2.057: 99.0296% ( 3) 00:18:50.471 2.057 - 2.072: 99.0481% ( 3) 00:18:50.471 2.072 - 2.088: 99.0543% ( 1) 00:18:50.471 2.088 - 2.103: 99.0729% ( 3) 00:18:50.471 2.149 - 2.164: 99.0852% ( 2) 00:18:50.471 2.164 - 2.179: 99.1594% ( 12) 00:18:50.471 2.179 - 2.194: 99.2336% ( 12) 00:18:50.471 2.194 - 2.210: 99.2583% ( 4) 00:18:50.471 2.210 - 2.225: 99.2830% ( 4) 00:18:50.471 2.225 - 2.240: 99.3201% ( 6) 00:18:50.471 2.240 - 2.255: 99.3510% ( 5) 00:18:50.471 2.270 - 2.286: 99.3572% ( 1) 00:18:50.471 2.301 - 2.316: 99.3634% ( 1) 00:18:50.471 2.331 - 2.347: 99.3757% ( 2) 00:18:50.471 2.347 - 2.362: 99.3881% ( 2) 00:18:50.471 2.514 - 2.530: 99.3943% ( 1) 00:18:50.471 2.728 - 2.743: 99.4005% ( 1) 00:18:50.471 2.850 - 2.865: 99.4066% ( 1) 00:18:50.471 2.956 - 2.971: 99.4128% ( 1) 00:18:50.471 3.017 - 3.032: 99.4190% ( 1) 00:18:50.471 4.053 - 4.084: 99.4314% ( 2) 00:18:50.471 4.084 - 4.114: 99.4375% ( 1) 00:18:50.471 4.114 - 4.145: 99.4561% ( 3) 00:18:50.471 4.145 - 4.175: 99.4623% ( 1) 00:18:50.471 4.358 - 4.389: 99.4684% ( 1) 00:18:50.471 4.510 - 4.541: 99.4746% ( 1) 00:18:50.471 4.541 - 4.571: 99.4808% ( 1) 00:18:50.472 4.724 - 4.754: 99.4870% ( 1) 00:18:50.472 4.754 - 4.785: 99.4932% ( 1) 
00:18:50.472 4.785 - 4.815: 99.4994% ( 1) 00:18:50.472 4.998 - 5.029: 99.5055% ( 1) 00:18:50.472 5.150 - 5.181: 99.5117% ( 1) 00:18:50.472 5.638 - 5.669: 99.5179% ( 1) 00:18:50.472 6.004 - 6.034: 99.5241% ( 1) 00:18:50.472 6.888 - 6.918: 99.5303% ( 1) 00:18:50.472 10.179 - 10.240: 99.5364% ( 1) 00:18:50.472 3370.423 - 3386.027: 99.5426% ( 1) 00:18:50.472 3994.575 - 4025.783: 100.0000% ( 74) 00:18:50.472 00:18:50.472 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:50.472 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:50.472 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:50.472 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:50.472 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:50.731 [ 00:18:50.731 { 00:18:50.731 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:50.731 "subtype": "Discovery", 00:18:50.731 "listen_addresses": [], 00:18:50.731 "allow_any_host": true, 00:18:50.731 "hosts": [] 00:18:50.731 }, 00:18:50.731 { 00:18:50.731 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:50.731 "subtype": "NVMe", 00:18:50.731 "listen_addresses": [ 00:18:50.731 { 00:18:50.731 "trtype": "VFIOUSER", 00:18:50.731 "adrfam": "IPv4", 00:18:50.731 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:50.731 "trsvcid": "0" 00:18:50.731 } 00:18:50.731 ], 00:18:50.731 "allow_any_host": true, 00:18:50.731 "hosts": [], 00:18:50.731 "serial_number": "SPDK1", 00:18:50.731 "model_number": "SPDK bdev Controller", 00:18:50.731 "max_namespaces": 32, 00:18:50.731 "min_cntlid": 1, 
00:18:50.731 "max_cntlid": 65519, 00:18:50.731 "namespaces": [ 00:18:50.731 { 00:18:50.731 "nsid": 1, 00:18:50.731 "bdev_name": "Malloc1", 00:18:50.731 "name": "Malloc1", 00:18:50.731 "nguid": "2BDAF1B0F0004B7E98E075429349F85F", 00:18:50.731 "uuid": "2bdaf1b0-f000-4b7e-98e0-75429349f85f" 00:18:50.731 } 00:18:50.731 ] 00:18:50.731 }, 00:18:50.731 { 00:18:50.731 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:50.731 "subtype": "NVMe", 00:18:50.731 "listen_addresses": [ 00:18:50.731 { 00:18:50.731 "trtype": "VFIOUSER", 00:18:50.731 "adrfam": "IPv4", 00:18:50.731 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:50.731 "trsvcid": "0" 00:18:50.731 } 00:18:50.731 ], 00:18:50.731 "allow_any_host": true, 00:18:50.731 "hosts": [], 00:18:50.731 "serial_number": "SPDK2", 00:18:50.731 "model_number": "SPDK bdev Controller", 00:18:50.731 "max_namespaces": 32, 00:18:50.731 "min_cntlid": 1, 00:18:50.731 "max_cntlid": 65519, 00:18:50.731 "namespaces": [ 00:18:50.731 { 00:18:50.731 "nsid": 1, 00:18:50.731 "bdev_name": "Malloc2", 00:18:50.731 "name": "Malloc2", 00:18:50.731 "nguid": "4154BDB94A1F4EA1A0C2BAB3F710548B", 00:18:50.731 "uuid": "4154bdb9-4a1f-4ea1-a0c2-bab3f710548b" 00:18:50.731 } 00:18:50.731 ] 00:18:50.731 } 00:18:50.731 ] 00:18:50.731 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:50.731 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=968623 00:18:50.731 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:50.731 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:50.731 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@1269 -- # local i=0 00:18:50.731 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:50.731 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:50.731 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:50.731 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:50.731 06:09:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:50.991 [2024-12-15 06:09:10.945411] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:50.991 Malloc3 00:18:50.991 06:09:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:51.250 [2024-12-15 06:09:11.180175] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:51.250 06:09:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:51.250 Asynchronous Event Request test 00:18:51.250 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:51.250 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:51.250 Registering asynchronous event callbacks... 00:18:51.250 Starting namespace attribute notice tests for all controllers... 00:18:51.250 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:51.250 aer_cb - Changed Namespace 00:18:51.250 Cleaning up... 
00:18:51.250 [ 00:18:51.250 { 00:18:51.250 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:51.250 "subtype": "Discovery", 00:18:51.250 "listen_addresses": [], 00:18:51.250 "allow_any_host": true, 00:18:51.250 "hosts": [] 00:18:51.250 }, 00:18:51.250 { 00:18:51.250 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:51.250 "subtype": "NVMe", 00:18:51.250 "listen_addresses": [ 00:18:51.250 { 00:18:51.250 "trtype": "VFIOUSER", 00:18:51.250 "adrfam": "IPv4", 00:18:51.250 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:51.250 "trsvcid": "0" 00:18:51.250 } 00:18:51.250 ], 00:18:51.251 "allow_any_host": true, 00:18:51.251 "hosts": [], 00:18:51.251 "serial_number": "SPDK1", 00:18:51.251 "model_number": "SPDK bdev Controller", 00:18:51.251 "max_namespaces": 32, 00:18:51.251 "min_cntlid": 1, 00:18:51.251 "max_cntlid": 65519, 00:18:51.251 "namespaces": [ 00:18:51.251 { 00:18:51.251 "nsid": 1, 00:18:51.251 "bdev_name": "Malloc1", 00:18:51.251 "name": "Malloc1", 00:18:51.251 "nguid": "2BDAF1B0F0004B7E98E075429349F85F", 00:18:51.251 "uuid": "2bdaf1b0-f000-4b7e-98e0-75429349f85f" 00:18:51.251 }, 00:18:51.251 { 00:18:51.251 "nsid": 2, 00:18:51.251 "bdev_name": "Malloc3", 00:18:51.251 "name": "Malloc3", 00:18:51.251 "nguid": "6D73AE1D79054A8980CA3C3C32D201A2", 00:18:51.251 "uuid": "6d73ae1d-7905-4a89-80ca-3c3c32d201a2" 00:18:51.251 } 00:18:51.251 ] 00:18:51.251 }, 00:18:51.251 { 00:18:51.251 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:51.251 "subtype": "NVMe", 00:18:51.251 "listen_addresses": [ 00:18:51.251 { 00:18:51.251 "trtype": "VFIOUSER", 00:18:51.251 "adrfam": "IPv4", 00:18:51.251 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:51.251 "trsvcid": "0" 00:18:51.251 } 00:18:51.251 ], 00:18:51.251 "allow_any_host": true, 00:18:51.251 "hosts": [], 00:18:51.251 "serial_number": "SPDK2", 00:18:51.251 "model_number": "SPDK bdev Controller", 00:18:51.251 "max_namespaces": 32, 00:18:51.251 "min_cntlid": 1, 00:18:51.251 "max_cntlid": 65519, 00:18:51.251 "namespaces": [ 
00:18:51.251 { 00:18:51.251 "nsid": 1, 00:18:51.251 "bdev_name": "Malloc2", 00:18:51.251 "name": "Malloc2", 00:18:51.251 "nguid": "4154BDB94A1F4EA1A0C2BAB3F710548B", 00:18:51.251 "uuid": "4154bdb9-4a1f-4ea1-a0c2-bab3f710548b" 00:18:51.251 } 00:18:51.251 ] 00:18:51.251 } 00:18:51.251 ] 00:18:51.512 06:09:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 968623 00:18:51.512 06:09:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:51.512 06:09:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:51.512 06:09:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:51.512 06:09:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:51.512 [2024-12-15 06:09:11.406924] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:18:51.512 [2024-12-15 06:09:11.406950] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid968681 ] 00:18:51.512 [2024-12-15 06:09:11.448306] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:51.512 [2024-12-15 06:09:11.451233] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:51.512 [2024-12-15 06:09:11.451252] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f601f6de000 00:18:51.512 [2024-12-15 06:09:11.452234] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:51.512 [2024-12-15 06:09:11.453237] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:51.512 [2024-12-15 06:09:11.454244] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:51.512 [2024-12-15 06:09:11.455245] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:51.512 [2024-12-15 06:09:11.456256] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:51.512 [2024-12-15 06:09:11.457260] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:51.512 [2024-12-15 06:09:11.458273] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:51.512 
[2024-12-15 06:09:11.459282] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:51.512 [2024-12-15 06:09:11.460288] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:51.512 [2024-12-15 06:09:11.460300] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f601e3e8000 00:18:51.512 [2024-12-15 06:09:11.461213] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:51.512 [2024-12-15 06:09:11.469567] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:51.513 [2024-12-15 06:09:11.469594] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:18:51.513 [2024-12-15 06:09:11.474675] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:51.513 [2024-12-15 06:09:11.474709] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:51.513 [2024-12-15 06:09:11.474778] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:18:51.513 [2024-12-15 06:09:11.474793] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:18:51.513 [2024-12-15 06:09:11.474798] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:18:51.513 [2024-12-15 06:09:11.475678] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:51.513 [2024-12-15 06:09:11.475688] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:18:51.513 [2024-12-15 06:09:11.475694] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:18:51.513 [2024-12-15 06:09:11.476685] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:51.513 [2024-12-15 06:09:11.476693] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:18:51.513 [2024-12-15 06:09:11.476700] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:51.513 [2024-12-15 06:09:11.477697] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:51.513 [2024-12-15 06:09:11.477706] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:51.513 [2024-12-15 06:09:11.478700] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:51.513 [2024-12-15 06:09:11.478708] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:51.513 [2024-12-15 06:09:11.478713] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:51.513 [2024-12-15 06:09:11.478719] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:51.513 [2024-12-15 06:09:11.478826] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:18:51.513 [2024-12-15 06:09:11.478830] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:51.513 [2024-12-15 06:09:11.478837] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:51.513 [2024-12-15 06:09:11.479709] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:51.513 [2024-12-15 06:09:11.480715] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:51.513 [2024-12-15 06:09:11.481718] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:51.513 [2024-12-15 06:09:11.482726] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:51.513 [2024-12-15 06:09:11.482764] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:51.513 [2024-12-15 06:09:11.483732] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:51.513 [2024-12-15 06:09:11.483740] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:51.513 [2024-12-15 06:09:11.483745] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:51.513 [2024-12-15 06:09:11.483762] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:18:51.513 [2024-12-15 06:09:11.483769] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:51.513 [2024-12-15 06:09:11.483779] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:51.513 [2024-12-15 06:09:11.483783] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:51.513 [2024-12-15 06:09:11.483786] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:51.513 [2024-12-15 06:09:11.483797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:51.513 [2024-12-15 06:09:11.489999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:51.513 [2024-12-15 06:09:11.490009] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:18:51.513 [2024-12-15 06:09:11.490013] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:18:51.513 [2024-12-15 06:09:11.490033] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:18:51.513 [2024-12-15 06:09:11.490038] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:51.513 [2024-12-15 06:09:11.490042] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:18:51.513 [2024-12-15 06:09:11.490046] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:18:51.513 [2024-12-15 06:09:11.490050] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:18:51.513 [2024-12-15 06:09:11.490059] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:51.513 [2024-12-15 06:09:11.490070] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:51.513 [2024-12-15 06:09:11.497997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:51.513 [2024-12-15 06:09:11.498008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:51.513 [2024-12-15 06:09:11.498016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:51.513 [2024-12-15 06:09:11.498023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:51.513 [2024-12-15 06:09:11.498030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:51.513 [2024-12-15 06:09:11.498034] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:51.513 [2024-12-15 06:09:11.498042] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:51.513 [2024-12-15 06:09:11.498050] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:51.513 [2024-12-15 06:09:11.505997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:51.513 [2024-12-15 06:09:11.506005] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:18:51.513 [2024-12-15 06:09:11.506010] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:51.513 [2024-12-15 06:09:11.506016] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:18:51.513 [2024-12-15 06:09:11.506021] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:51.513 [2024-12-15 06:09:11.506029] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:51.513 [2024-12-15 06:09:11.513995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:51.513 [2024-12-15 06:09:11.514046] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:18:51.513 [2024-12-15 06:09:11.514055] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:51.513 
[2024-12-15 06:09:11.514062] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:51.513 [2024-12-15 06:09:11.514066] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:51.513 [2024-12-15 06:09:11.514069] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:51.513 [2024-12-15 06:09:11.514075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:51.513 [2024-12-15 06:09:11.521998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:51.513 [2024-12-15 06:09:11.522007] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:18:51.513 [2024-12-15 06:09:11.522017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:18:51.513 [2024-12-15 06:09:11.522023] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:51.513 [2024-12-15 06:09:11.522032] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:51.513 [2024-12-15 06:09:11.522036] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:51.513 [2024-12-15 06:09:11.522039] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:51.513 [2024-12-15 06:09:11.522045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:51.513 [2024-12-15 06:09:11.529997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:51.513 [2024-12-15 06:09:11.530011] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:51.513 [2024-12-15 06:09:11.530019] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:51.513 [2024-12-15 06:09:11.530025] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:51.513 [2024-12-15 06:09:11.530029] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:51.513 [2024-12-15 06:09:11.530032] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:51.514 [2024-12-15 06:09:11.530038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:51.514 [2024-12-15 06:09:11.537997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:51.514 [2024-12-15 06:09:11.538006] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:51.514 [2024-12-15 06:09:11.538012] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:51.514 [2024-12-15 06:09:11.538019] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:18:51.514 [2024-12-15 06:09:11.538025] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:18:51.514 [2024-12-15 06:09:11.538030] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:51.514 [2024-12-15 06:09:11.538034] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:18:51.514 [2024-12-15 06:09:11.538039] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:51.514 [2024-12-15 06:09:11.538043] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:18:51.514 [2024-12-15 06:09:11.538048] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:18:51.514 [2024-12-15 06:09:11.538063] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:51.514 [2024-12-15 06:09:11.545996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:51.514 [2024-12-15 06:09:11.546008] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:51.514 [2024-12-15 06:09:11.553998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:51.514 [2024-12-15 06:09:11.554011] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:51.514 [2024-12-15 06:09:11.561997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:51.514 [2024-12-15 
06:09:11.562009] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:51.514 [2024-12-15 06:09:11.569996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:51.514 [2024-12-15 06:09:11.570010] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:51.514 [2024-12-15 06:09:11.570015] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:51.514 [2024-12-15 06:09:11.570018] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:51.514 [2024-12-15 06:09:11.570021] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:51.514 [2024-12-15 06:09:11.570024] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:51.514 [2024-12-15 06:09:11.570030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:51.514 [2024-12-15 06:09:11.570036] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:51.514 [2024-12-15 06:09:11.570040] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:51.514 [2024-12-15 06:09:11.570043] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:51.514 [2024-12-15 06:09:11.570049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:51.514 [2024-12-15 06:09:11.570054] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:51.514 [2024-12-15 06:09:11.570059] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:51.514 [2024-12-15 06:09:11.570062] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:51.514 [2024-12-15 06:09:11.570067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:51.514 [2024-12-15 06:09:11.570073] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:51.514 [2024-12-15 06:09:11.570077] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:51.514 [2024-12-15 06:09:11.570080] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:51.514 [2024-12-15 06:09:11.570086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:51.514 [2024-12-15 06:09:11.577997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:51.514 [2024-12-15 06:09:11.578010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:51.514 [2024-12-15 06:09:11.578019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:51.514 [2024-12-15 06:09:11.578025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:51.514 ===================================================== 00:18:51.514 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:51.514 ===================================================== 00:18:51.514 Controller Capabilities/Features 00:18:51.514 
================================ 00:18:51.514 Vendor ID: 4e58 00:18:51.514 Subsystem Vendor ID: 4e58 00:18:51.514 Serial Number: SPDK2 00:18:51.514 Model Number: SPDK bdev Controller 00:18:51.514 Firmware Version: 25.01 00:18:51.514 Recommended Arb Burst: 6 00:18:51.514 IEEE OUI Identifier: 8d 6b 50 00:18:51.514 Multi-path I/O 00:18:51.514 May have multiple subsystem ports: Yes 00:18:51.514 May have multiple controllers: Yes 00:18:51.514 Associated with SR-IOV VF: No 00:18:51.514 Max Data Transfer Size: 131072 00:18:51.514 Max Number of Namespaces: 32 00:18:51.514 Max Number of I/O Queues: 127 00:18:51.514 NVMe Specification Version (VS): 1.3 00:18:51.514 NVMe Specification Version (Identify): 1.3 00:18:51.514 Maximum Queue Entries: 256 00:18:51.514 Contiguous Queues Required: Yes 00:18:51.514 Arbitration Mechanisms Supported 00:18:51.514 Weighted Round Robin: Not Supported 00:18:51.514 Vendor Specific: Not Supported 00:18:51.514 Reset Timeout: 15000 ms 00:18:51.514 Doorbell Stride: 4 bytes 00:18:51.514 NVM Subsystem Reset: Not Supported 00:18:51.514 Command Sets Supported 00:18:51.514 NVM Command Set: Supported 00:18:51.514 Boot Partition: Not Supported 00:18:51.514 Memory Page Size Minimum: 4096 bytes 00:18:51.514 Memory Page Size Maximum: 4096 bytes 00:18:51.514 Persistent Memory Region: Not Supported 00:18:51.514 Optional Asynchronous Events Supported 00:18:51.514 Namespace Attribute Notices: Supported 00:18:51.514 Firmware Activation Notices: Not Supported 00:18:51.514 ANA Change Notices: Not Supported 00:18:51.514 PLE Aggregate Log Change Notices: Not Supported 00:18:51.514 LBA Status Info Alert Notices: Not Supported 00:18:51.514 EGE Aggregate Log Change Notices: Not Supported 00:18:51.514 Normal NVM Subsystem Shutdown event: Not Supported 00:18:51.514 Zone Descriptor Change Notices: Not Supported 00:18:51.514 Discovery Log Change Notices: Not Supported 00:18:51.514 Controller Attributes 00:18:51.514 128-bit Host Identifier: Supported 00:18:51.514 
Non-Operational Permissive Mode: Not Supported 00:18:51.514 NVM Sets: Not Supported 00:18:51.514 Read Recovery Levels: Not Supported 00:18:51.514 Endurance Groups: Not Supported 00:18:51.514 Predictable Latency Mode: Not Supported 00:18:51.514 Traffic Based Keep ALive: Not Supported 00:18:51.514 Namespace Granularity: Not Supported 00:18:51.514 SQ Associations: Not Supported 00:18:51.514 UUID List: Not Supported 00:18:51.514 Multi-Domain Subsystem: Not Supported 00:18:51.514 Fixed Capacity Management: Not Supported 00:18:51.514 Variable Capacity Management: Not Supported 00:18:51.514 Delete Endurance Group: Not Supported 00:18:51.514 Delete NVM Set: Not Supported 00:18:51.514 Extended LBA Formats Supported: Not Supported 00:18:51.514 Flexible Data Placement Supported: Not Supported 00:18:51.514 00:18:51.514 Controller Memory Buffer Support 00:18:51.514 ================================ 00:18:51.514 Supported: No 00:18:51.514 00:18:51.514 Persistent Memory Region Support 00:18:51.514 ================================ 00:18:51.514 Supported: No 00:18:51.514 00:18:51.514 Admin Command Set Attributes 00:18:51.514 ============================ 00:18:51.514 Security Send/Receive: Not Supported 00:18:51.514 Format NVM: Not Supported 00:18:51.514 Firmware Activate/Download: Not Supported 00:18:51.514 Namespace Management: Not Supported 00:18:51.514 Device Self-Test: Not Supported 00:18:51.514 Directives: Not Supported 00:18:51.514 NVMe-MI: Not Supported 00:18:51.514 Virtualization Management: Not Supported 00:18:51.514 Doorbell Buffer Config: Not Supported 00:18:51.514 Get LBA Status Capability: Not Supported 00:18:51.514 Command & Feature Lockdown Capability: Not Supported 00:18:51.514 Abort Command Limit: 4 00:18:51.514 Async Event Request Limit: 4 00:18:51.514 Number of Firmware Slots: N/A 00:18:51.514 Firmware Slot 1 Read-Only: N/A 00:18:51.514 Firmware Activation Without Reset: N/A 00:18:51.514 Multiple Update Detection Support: N/A 00:18:51.514 Firmware Update 
Granularity: No Information Provided 00:18:51.514 Per-Namespace SMART Log: No 00:18:51.514 Asymmetric Namespace Access Log Page: Not Supported 00:18:51.514 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:51.514 Command Effects Log Page: Supported 00:18:51.514 Get Log Page Extended Data: Supported 00:18:51.514 Telemetry Log Pages: Not Supported 00:18:51.514 Persistent Event Log Pages: Not Supported 00:18:51.514 Supported Log Pages Log Page: May Support 00:18:51.514 Commands Supported & Effects Log Page: Not Supported 00:18:51.515 Feature Identifiers & Effects Log Page:May Support 00:18:51.515 NVMe-MI Commands & Effects Log Page: May Support 00:18:51.515 Data Area 4 for Telemetry Log: Not Supported 00:18:51.515 Error Log Page Entries Supported: 128 00:18:51.515 Keep Alive: Supported 00:18:51.515 Keep Alive Granularity: 10000 ms 00:18:51.515 00:18:51.515 NVM Command Set Attributes 00:18:51.515 ========================== 00:18:51.515 Submission Queue Entry Size 00:18:51.515 Max: 64 00:18:51.515 Min: 64 00:18:51.515 Completion Queue Entry Size 00:18:51.515 Max: 16 00:18:51.515 Min: 16 00:18:51.515 Number of Namespaces: 32 00:18:51.515 Compare Command: Supported 00:18:51.515 Write Uncorrectable Command: Not Supported 00:18:51.515 Dataset Management Command: Supported 00:18:51.515 Write Zeroes Command: Supported 00:18:51.515 Set Features Save Field: Not Supported 00:18:51.515 Reservations: Not Supported 00:18:51.515 Timestamp: Not Supported 00:18:51.515 Copy: Supported 00:18:51.515 Volatile Write Cache: Present 00:18:51.515 Atomic Write Unit (Normal): 1 00:18:51.515 Atomic Write Unit (PFail): 1 00:18:51.515 Atomic Compare & Write Unit: 1 00:18:51.515 Fused Compare & Write: Supported 00:18:51.515 Scatter-Gather List 00:18:51.515 SGL Command Set: Supported (Dword aligned) 00:18:51.515 SGL Keyed: Not Supported 00:18:51.515 SGL Bit Bucket Descriptor: Not Supported 00:18:51.515 SGL Metadata Pointer: Not Supported 00:18:51.515 Oversized SGL: Not Supported 00:18:51.515 SGL 
Metadata Address: Not Supported 00:18:51.515 SGL Offset: Not Supported 00:18:51.515 Transport SGL Data Block: Not Supported 00:18:51.515 Replay Protected Memory Block: Not Supported 00:18:51.515 00:18:51.515 Firmware Slot Information 00:18:51.515 ========================= 00:18:51.515 Active slot: 1 00:18:51.515 Slot 1 Firmware Revision: 25.01 00:18:51.515 00:18:51.515 00:18:51.515 Commands Supported and Effects 00:18:51.515 ============================== 00:18:51.515 Admin Commands 00:18:51.515 -------------- 00:18:51.515 Get Log Page (02h): Supported 00:18:51.515 Identify (06h): Supported 00:18:51.515 Abort (08h): Supported 00:18:51.515 Set Features (09h): Supported 00:18:51.515 Get Features (0Ah): Supported 00:18:51.515 Asynchronous Event Request (0Ch): Supported 00:18:51.515 Keep Alive (18h): Supported 00:18:51.515 I/O Commands 00:18:51.515 ------------ 00:18:51.515 Flush (00h): Supported LBA-Change 00:18:51.515 Write (01h): Supported LBA-Change 00:18:51.515 Read (02h): Supported 00:18:51.515 Compare (05h): Supported 00:18:51.515 Write Zeroes (08h): Supported LBA-Change 00:18:51.515 Dataset Management (09h): Supported LBA-Change 00:18:51.515 Copy (19h): Supported LBA-Change 00:18:51.515 00:18:51.515 Error Log 00:18:51.515 ========= 00:18:51.515 00:18:51.515 Arbitration 00:18:51.515 =========== 00:18:51.515 Arbitration Burst: 1 00:18:51.515 00:18:51.515 Power Management 00:18:51.515 ================ 00:18:51.515 Number of Power States: 1 00:18:51.515 Current Power State: Power State #0 00:18:51.515 Power State #0: 00:18:51.515 Max Power: 0.00 W 00:18:51.515 Non-Operational State: Operational 00:18:51.515 Entry Latency: Not Reported 00:18:51.515 Exit Latency: Not Reported 00:18:51.515 Relative Read Throughput: 0 00:18:51.515 Relative Read Latency: 0 00:18:51.515 Relative Write Throughput: 0 00:18:51.515 Relative Write Latency: 0 00:18:51.515 Idle Power: Not Reported 00:18:51.515 Active Power: Not Reported 00:18:51.515 Non-Operational Permissive Mode: Not 
Supported 00:18:51.515 00:18:51.515 Health Information 00:18:51.515 ================== 00:18:51.515 Critical Warnings: 00:18:51.515 Available Spare Space: OK 00:18:51.515 Temperature: OK 00:18:51.515 Device Reliability: OK 00:18:51.515 Read Only: No 00:18:51.515 Volatile Memory Backup: OK 00:18:51.515 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:51.515 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:51.515 Available Spare: 0% 00:18:51.515 Available Sp[2024-12-15 06:09:11.578112] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:51.515 [2024-12-15 06:09:11.585998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:51.515 [2024-12-15 06:09:11.586026] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:18:51.515 [2024-12-15 06:09:11.586036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.515 [2024-12-15 06:09:11.586042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.515 [2024-12-15 06:09:11.586048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.515 [2024-12-15 06:09:11.586053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.515 [2024-12-15 06:09:11.586108] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:51.515 [2024-12-15 06:09:11.586119] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:51.515 
[2024-12-15 06:09:11.587111] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:51.515 [2024-12-15 06:09:11.587154] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:18:51.515 [2024-12-15 06:09:11.587160] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:18:51.515 [2024-12-15 06:09:11.588116] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:51.515 [2024-12-15 06:09:11.588126] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:18:51.515 [2024-12-15 06:09:11.588173] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:51.515 [2024-12-15 06:09:11.590996] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:51.515 are Threshold: 0% 00:18:51.515 Life Percentage Used: 0% 00:18:51.515 Data Units Read: 0 00:18:51.515 Data Units Written: 0 00:18:51.515 Host Read Commands: 0 00:18:51.515 Host Write Commands: 0 00:18:51.515 Controller Busy Time: 0 minutes 00:18:51.515 Power Cycles: 0 00:18:51.515 Power On Hours: 0 hours 00:18:51.515 Unsafe Shutdowns: 0 00:18:51.515 Unrecoverable Media Errors: 0 00:18:51.515 Lifetime Error Log Entries: 0 00:18:51.515 Warning Temperature Time: 0 minutes 00:18:51.515 Critical Temperature Time: 0 minutes 00:18:51.515 00:18:51.515 Number of Queues 00:18:51.515 ================ 00:18:51.515 Number of I/O Submission Queues: 127 00:18:51.515 Number of I/O Completion Queues: 127 00:18:51.515 00:18:51.515 Active Namespaces 00:18:51.515 ================= 00:18:51.515 Namespace ID:1 00:18:51.515 Error Recovery Timeout: Unlimited 
00:18:51.515 Command Set Identifier: NVM (00h) 00:18:51.515 Deallocate: Supported 00:18:51.515 Deallocated/Unwritten Error: Not Supported 00:18:51.515 Deallocated Read Value: Unknown 00:18:51.515 Deallocate in Write Zeroes: Not Supported 00:18:51.515 Deallocated Guard Field: 0xFFFF 00:18:51.515 Flush: Supported 00:18:51.515 Reservation: Supported 00:18:51.515 Namespace Sharing Capabilities: Multiple Controllers 00:18:51.515 Size (in LBAs): 131072 (0GiB) 00:18:51.515 Capacity (in LBAs): 131072 (0GiB) 00:18:51.515 Utilization (in LBAs): 131072 (0GiB) 00:18:51.515 NGUID: 4154BDB94A1F4EA1A0C2BAB3F710548B 00:18:51.515 UUID: 4154bdb9-4a1f-4ea1-a0c2-bab3f710548b 00:18:51.515 Thin Provisioning: Not Supported 00:18:51.515 Per-NS Atomic Units: Yes 00:18:51.515 Atomic Boundary Size (Normal): 0 00:18:51.515 Atomic Boundary Size (PFail): 0 00:18:51.515 Atomic Boundary Offset: 0 00:18:51.515 Maximum Single Source Range Length: 65535 00:18:51.515 Maximum Copy Length: 65535 00:18:51.515 Maximum Source Range Count: 1 00:18:51.515 NGUID/EUI64 Never Reused: No 00:18:51.515 Namespace Write Protected: No 00:18:51.515 Number of LBA Formats: 1 00:18:51.515 Current LBA Format: LBA Format #00 00:18:51.515 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:51.515 00:18:51.515 06:09:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:51.775 [2024-12-15 06:09:11.808189] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:57.051 Initializing NVMe Controllers 00:18:57.051 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:57.051 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:18:57.051 Initialization complete. Launching workers. 00:18:57.051 ======================================================== 00:18:57.051 Latency(us) 00:18:57.051 Device Information : IOPS MiB/s Average min max 00:18:57.051 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39957.27 156.08 3203.27 959.86 8609.38 00:18:57.051 ======================================================== 00:18:57.051 Total : 39957.27 156.08 3203.27 959.86 8609.38 00:18:57.051 00:18:57.051 [2024-12-15 06:09:16.912266] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:57.051 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:57.052 [2024-12-15 06:09:17.143960] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:02.324 Initializing NVMe Controllers 00:19:02.324 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:02.324 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:02.324 Initialization complete. Launching workers. 
00:19:02.324 ======================================================== 00:19:02.324 Latency(us) 00:19:02.324 Device Information : IOPS MiB/s Average min max 00:19:02.324 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39932.99 155.99 3205.20 970.95 10354.17 00:19:02.324 ======================================================== 00:19:02.324 Total : 39932.99 155.99 3205.20 970.95 10354.17 00:19:02.324 00:19:02.324 [2024-12-15 06:09:22.160941] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:02.324 06:09:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:02.324 [2024-12-15 06:09:22.361195] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:07.596 [2024-12-15 06:09:27.498102] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:07.596 Initializing NVMe Controllers 00:19:07.596 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:07.596 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:07.596 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:19:07.596 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:19:07.596 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:19:07.596 Initialization complete. Launching workers. 
00:19:07.596 Starting thread on core 2 00:19:07.596 Starting thread on core 3 00:19:07.596 Starting thread on core 1 00:19:07.596 06:09:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:19:07.855 [2024-12-15 06:09:27.781781] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:11.151 [2024-12-15 06:09:30.848160] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:11.151 Initializing NVMe Controllers 00:19:11.151 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:11.151 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:11.151 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:19:11.151 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:19:11.151 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:19:11.151 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:19:11.151 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:11.151 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:11.151 Initialization complete. Launching workers. 
00:19:11.151 Starting thread on core 1 with urgent priority queue 00:19:11.151 Starting thread on core 2 with urgent priority queue 00:19:11.151 Starting thread on core 3 with urgent priority queue 00:19:11.151 Starting thread on core 0 with urgent priority queue 00:19:11.151 SPDK bdev Controller (SPDK2 ) core 0: 6219.33 IO/s 16.08 secs/100000 ios 00:19:11.151 SPDK bdev Controller (SPDK2 ) core 1: 5102.67 IO/s 19.60 secs/100000 ios 00:19:11.151 SPDK bdev Controller (SPDK2 ) core 2: 6476.67 IO/s 15.44 secs/100000 ios 00:19:11.151 SPDK bdev Controller (SPDK2 ) core 3: 5136.33 IO/s 19.47 secs/100000 ios 00:19:11.151 ======================================================== 00:19:11.151 00:19:11.151 06:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:11.151 [2024-12-15 06:09:31.130860] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:11.151 Initializing NVMe Controllers 00:19:11.151 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:11.151 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:11.151 Namespace ID: 1 size: 0GB 00:19:11.151 Initialization complete. 00:19:11.151 INFO: using host memory buffer for IO 00:19:11.151 Hello world! 
00:19:11.151 [2024-12-15 06:09:31.140934] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:11.151 06:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:11.501 [2024-12-15 06:09:31.420769] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:12.509 Initializing NVMe Controllers 00:19:12.509 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:12.509 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:12.509 Initialization complete. Launching workers. 00:19:12.509 submit (in ns) avg, min, max = 6134.4, 3141.9, 3999740.0 00:19:12.509 complete (in ns) avg, min, max = 17446.2, 1716.2, 3998194.3 00:19:12.509 00:19:12.509 Submit histogram 00:19:12.509 ================ 00:19:12.509 Range in us Cumulative Count 00:19:12.509 3.139 - 3.154: 0.0124% ( 2) 00:19:12.509 3.154 - 3.170: 0.0186% ( 1) 00:19:12.509 3.170 - 3.185: 0.0497% ( 5) 00:19:12.509 3.185 - 3.200: 0.1181% ( 11) 00:19:12.509 3.200 - 3.215: 0.3418% ( 36) 00:19:12.509 3.215 - 3.230: 0.7395% ( 64) 00:19:12.509 3.230 - 3.246: 1.6345% ( 144) 00:19:12.509 3.246 - 3.261: 4.2695% ( 424) 00:19:12.509 3.261 - 3.276: 9.4773% ( 838) 00:19:12.509 3.276 - 3.291: 15.4869% ( 967) 00:19:12.509 3.291 - 3.307: 21.9191% ( 1035) 00:19:12.509 3.307 - 3.322: 29.8987% ( 1284) 00:19:12.509 3.322 - 3.337: 37.2631% ( 1185) 00:19:12.509 3.337 - 3.352: 43.0054% ( 924) 00:19:12.509 3.352 - 3.368: 47.9398% ( 794) 00:19:12.509 3.368 - 3.383: 52.2839% ( 699) 00:19:12.509 3.383 - 3.398: 56.0935% ( 613) 00:19:12.509 3.398 - 3.413: 60.7793% ( 754) 00:19:12.509 3.413 - 3.429: 68.3363% ( 1216) 00:19:12.509 3.429 - 3.444: 73.8425% ( 886) 00:19:12.509 3.444 - 3.459: 78.7210% ( 785) 
00:19:12.509 3.459 - 3.474: 83.3385% ( 743) 00:19:12.509 3.474 - 3.490: 86.2656% ( 471) 00:19:12.509 3.490 - 3.505: 87.6391% ( 221) 00:19:12.509 3.505 - 3.520: 88.2667% ( 101) 00:19:12.509 3.520 - 3.535: 88.6272% ( 58) 00:19:12.509 3.535 - 3.550: 89.0249% ( 64) 00:19:12.509 3.550 - 3.566: 89.6091% ( 94) 00:19:12.509 3.566 - 3.581: 90.4916% ( 142) 00:19:12.509 3.581 - 3.596: 91.5356% ( 168) 00:19:12.509 3.596 - 3.611: 92.4989% ( 155) 00:19:12.509 3.611 - 3.627: 93.4063% ( 146) 00:19:12.509 3.627 - 3.642: 94.1769% ( 124) 00:19:12.509 3.642 - 3.657: 94.9599% ( 126) 00:19:12.509 3.657 - 3.672: 95.9418% ( 158) 00:19:12.509 3.672 - 3.688: 96.6192% ( 109) 00:19:12.509 3.688 - 3.703: 97.3774% ( 122) 00:19:12.509 3.703 - 3.718: 97.9492% ( 92) 00:19:12.509 3.718 - 3.733: 98.4215% ( 76) 00:19:12.509 3.733 - 3.749: 98.6949% ( 44) 00:19:12.509 3.749 - 3.764: 98.9497% ( 41) 00:19:12.509 3.764 - 3.779: 99.1921% ( 39) 00:19:12.509 3.779 - 3.794: 99.4283% ( 38) 00:19:12.509 3.794 - 3.810: 99.5153% ( 14) 00:19:12.509 3.810 - 3.825: 99.5836% ( 11) 00:19:12.509 3.825 - 3.840: 99.6520% ( 11) 00:19:12.509 3.840 - 3.855: 99.6893% ( 6) 00:19:12.509 3.870 - 3.886: 99.6955% ( 1) 00:19:12.509 3.901 - 3.931: 99.7017% ( 1) 00:19:12.509 4.084 - 4.114: 99.7079% ( 1) 00:19:12.509 5.272 - 5.303: 99.7141% ( 1) 00:19:12.509 5.394 - 5.425: 99.7266% ( 2) 00:19:12.509 5.455 - 5.486: 99.7328% ( 1) 00:19:12.509 5.547 - 5.577: 99.7452% ( 2) 00:19:12.509 5.608 - 5.638: 99.7514% ( 1) 00:19:12.509 5.638 - 5.669: 99.7576% ( 1) 00:19:12.509 5.730 - 5.760: 99.7763% ( 3) 00:19:12.509 5.851 - 5.882: 99.7825% ( 1) 00:19:12.509 5.912 - 5.943: 99.7887% ( 1) 00:19:12.509 6.004 - 6.034: 99.7949% ( 1) 00:19:12.509 6.065 - 6.095: 99.8073% ( 2) 00:19:12.509 6.187 - 6.217: 99.8136% ( 1) 00:19:12.509 6.217 - 6.248: 99.8260% ( 2) 00:19:12.509 6.248 - 6.278: 99.8322% ( 1) 00:19:12.509 6.278 - 6.309: 99.8384% ( 1) 00:19:12.509 6.339 - 6.370: 99.8508% ( 2) 00:19:12.509 6.400 - 6.430: 99.8571% ( 1) 00:19:12.509 6.430 - 6.461: 
99.8633% ( 1) 00:19:12.509 6.644 - 6.674: 99.8757% ( 2) 00:19:12.509 6.674 - 6.705: 99.8819% ( 1) 00:19:12.509 6.705 - 6.735: 99.8881% ( 1) 00:19:12.509 6.827 - 6.857: 99.8944% ( 1) 00:19:12.509 7.070 - 7.101: 99.9006% ( 1) 00:19:12.509 8.168 - 8.229: 99.9068% ( 1) 00:19:12.509 8.838 - 8.899: 99.9130% ( 1) 00:19:12.509 14.080 - 14.141: 99.9192% ( 1) 00:19:12.509 15.604 - 15.726: 99.9254% ( 1) 00:19:12.509 18.408 - 18.530: 99.9316% ( 1) 00:19:12.509 3994.575 - 4025.783: 100.0000% ( 11) 00:19:12.509 00:19:12.509 [2024-12-15 06:09:32.513966] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:12.509 Complete histogram 00:19:12.509 ================== 00:19:12.509 Range in us Cumulative Count 00:19:12.509 1.714 - 1.722: 0.0373% ( 6) 00:19:12.509 1.722 - 1.730: 0.2548% ( 35) 00:19:12.509 1.730 - 1.737: 0.4412% ( 30) 00:19:12.509 1.737 - 1.745: 0.5904% ( 24) 00:19:12.509 1.745 - 1.752: 0.6277% ( 6) 00:19:12.509 1.752 - 1.760: 0.6588% ( 5) 00:19:12.509 1.760 - 1.768: 1.4045% ( 120) 00:19:12.509 1.768 - 1.775: 8.8062% ( 1191) 00:19:12.509 1.775 - 1.783: 31.0608% ( 3581) 00:19:12.509 1.783 - 1.790: 53.4150% ( 3597) 00:19:12.509 1.790 - 1.798: 62.5567% ( 1471) 00:19:12.510 1.798 - 1.806: 66.1861% ( 584) 00:19:12.510 1.806 - 1.813: 68.3488% ( 348) 00:19:12.510 1.813 - 1.821: 71.5990% ( 523) 00:19:12.510 1.821 - 1.829: 78.8640% ( 1169) 00:19:12.510 1.829 - 1.836: 87.9436% ( 1461) 00:19:12.510 1.836 - 1.844: 92.7413% ( 772) 00:19:12.510 1.844 - 1.851: 95.1464% ( 387) 00:19:12.510 1.851 - 1.859: 96.6317% ( 239) 00:19:12.510 1.859 - 1.867: 97.6446% ( 163) 00:19:12.510 1.867 - 1.874: 98.1915% ( 88) 00:19:12.510 1.874 - 1.882: 98.5209% ( 53) 00:19:12.510 1.882 - 1.890: 98.7198% ( 32) 00:19:12.510 1.890 - 1.897: 98.8814% ( 26) 00:19:12.510 1.897 - 1.905: 99.0367% ( 25) 00:19:12.510 1.905 - 1.912: 99.1362% ( 16) 00:19:12.510 1.912 - 1.920: 99.2667% ( 21) 00:19:12.510 1.920 - 1.928: 99.3412% ( 12) 00:19:12.510 1.928 - 1.935: 
99.3661% ( 4) 00:19:12.510 1.943 - 1.950: 99.3785% ( 2) 00:19:12.510 1.950 - 1.966: 99.4034% ( 4) 00:19:12.510 1.966 - 1.981: 99.4096% ( 1) 00:19:12.510 1.981 - 1.996: 99.4158% ( 1) 00:19:12.510 2.042 - 2.057: 99.4220% ( 1) 00:19:12.510 2.149 - 2.164: 99.4283% ( 1) 00:19:12.510 2.545 - 2.560: 99.4345% ( 1) 00:19:12.510 3.688 - 3.703: 99.4407% ( 1) 00:19:12.510 3.992 - 4.023: 99.4531% ( 2) 00:19:12.510 4.053 - 4.084: 99.4655% ( 2) 00:19:12.510 4.236 - 4.267: 99.4718% ( 1) 00:19:12.510 4.297 - 4.328: 99.4780% ( 1) 00:19:12.510 4.328 - 4.358: 99.4842% ( 1) 00:19:12.510 4.389 - 4.419: 99.4904% ( 1) 00:19:12.510 4.480 - 4.510: 99.4966% ( 1) 00:19:12.510 4.632 - 4.663: 99.5028% ( 1) 00:19:12.510 4.724 - 4.754: 99.5090% ( 1) 00:19:12.510 4.937 - 4.968: 99.5153% ( 1) 00:19:12.510 5.029 - 5.059: 99.5215% ( 1) 00:19:12.510 5.090 - 5.120: 99.5339% ( 2) 00:19:12.510 5.333 - 5.364: 99.5401% ( 1) 00:19:12.510 5.486 - 5.516: 99.5463% ( 1) 00:19:12.510 5.669 - 5.699: 99.5525% ( 1) 00:19:12.510 5.760 - 5.790: 99.5588% ( 1) 00:19:12.510 5.943 - 5.973: 99.5712% ( 2) 00:19:12.510 6.187 - 6.217: 99.5774% ( 1) 00:19:12.510 6.217 - 6.248: 99.5836% ( 1) 00:19:12.510 6.430 - 6.461: 99.5898% ( 1) 00:19:12.510 7.314 - 7.345: 99.5960% ( 1) 00:19:12.510 11.398 - 11.459: 99.6023% ( 1) 00:19:12.510 17.676 - 17.798: 99.6085% ( 1) 00:19:12.510 3994.575 - 4025.783: 100.0000% ( 63) 00:19:12.510 00:19:12.510 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:19:12.510 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:12.510 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:19:12.510 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:19:12.510 
06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:12.769 [ 00:19:12.769 { 00:19:12.769 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:12.769 "subtype": "Discovery", 00:19:12.769 "listen_addresses": [], 00:19:12.769 "allow_any_host": true, 00:19:12.769 "hosts": [] 00:19:12.769 }, 00:19:12.769 { 00:19:12.769 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:12.769 "subtype": "NVMe", 00:19:12.769 "listen_addresses": [ 00:19:12.769 { 00:19:12.769 "trtype": "VFIOUSER", 00:19:12.769 "adrfam": "IPv4", 00:19:12.769 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:12.769 "trsvcid": "0" 00:19:12.769 } 00:19:12.769 ], 00:19:12.769 "allow_any_host": true, 00:19:12.769 "hosts": [], 00:19:12.769 "serial_number": "SPDK1", 00:19:12.769 "model_number": "SPDK bdev Controller", 00:19:12.769 "max_namespaces": 32, 00:19:12.769 "min_cntlid": 1, 00:19:12.769 "max_cntlid": 65519, 00:19:12.769 "namespaces": [ 00:19:12.769 { 00:19:12.769 "nsid": 1, 00:19:12.769 "bdev_name": "Malloc1", 00:19:12.769 "name": "Malloc1", 00:19:12.769 "nguid": "2BDAF1B0F0004B7E98E075429349F85F", 00:19:12.769 "uuid": "2bdaf1b0-f000-4b7e-98e0-75429349f85f" 00:19:12.769 }, 00:19:12.769 { 00:19:12.769 "nsid": 2, 00:19:12.769 "bdev_name": "Malloc3", 00:19:12.769 "name": "Malloc3", 00:19:12.769 "nguid": "6D73AE1D79054A8980CA3C3C32D201A2", 00:19:12.769 "uuid": "6d73ae1d-7905-4a89-80ca-3c3c32d201a2" 00:19:12.769 } 00:19:12.769 ] 00:19:12.769 }, 00:19:12.769 { 00:19:12.769 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:12.769 "subtype": "NVMe", 00:19:12.769 "listen_addresses": [ 00:19:12.769 { 00:19:12.769 "trtype": "VFIOUSER", 00:19:12.769 "adrfam": "IPv4", 00:19:12.769 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:12.769 "trsvcid": "0" 00:19:12.769 } 00:19:12.769 ], 00:19:12.769 "allow_any_host": true, 00:19:12.769 "hosts": [], 00:19:12.769 "serial_number": "SPDK2", 00:19:12.769 
"model_number": "SPDK bdev Controller", 00:19:12.769 "max_namespaces": 32, 00:19:12.769 "min_cntlid": 1, 00:19:12.769 "max_cntlid": 65519, 00:19:12.769 "namespaces": [ 00:19:12.769 { 00:19:12.769 "nsid": 1, 00:19:12.769 "bdev_name": "Malloc2", 00:19:12.769 "name": "Malloc2", 00:19:12.769 "nguid": "4154BDB94A1F4EA1A0C2BAB3F710548B", 00:19:12.769 "uuid": "4154bdb9-4a1f-4ea1-a0c2-bab3f710548b" 00:19:12.769 } 00:19:12.769 ] 00:19:12.769 } 00:19:12.769 ] 00:19:12.769 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:12.769 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=972061 00:19:12.769 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:12.769 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:19:12.769 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:19:12.769 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:12.769 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:12.769 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:19:12.769 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:12.769 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:19:13.028 [2024-12-15 06:09:32.914447] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:13.028 Malloc4 00:19:13.028 06:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:19:13.028 [2024-12-15 06:09:33.163340] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:13.288 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:13.288 Asynchronous Event Request test 00:19:13.288 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:13.288 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:13.288 Registering asynchronous event callbacks... 00:19:13.288 Starting namespace attribute notice tests for all controllers... 00:19:13.288 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:13.288 aer_cb - Changed Namespace 00:19:13.288 Cleaning up... 
00:19:13.288 [ 00:19:13.288 { 00:19:13.288 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:13.288 "subtype": "Discovery", 00:19:13.288 "listen_addresses": [], 00:19:13.288 "allow_any_host": true, 00:19:13.288 "hosts": [] 00:19:13.288 }, 00:19:13.288 { 00:19:13.288 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:13.288 "subtype": "NVMe", 00:19:13.288 "listen_addresses": [ 00:19:13.288 { 00:19:13.288 "trtype": "VFIOUSER", 00:19:13.288 "adrfam": "IPv4", 00:19:13.288 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:13.288 "trsvcid": "0" 00:19:13.288 } 00:19:13.288 ], 00:19:13.288 "allow_any_host": true, 00:19:13.288 "hosts": [], 00:19:13.288 "serial_number": "SPDK1", 00:19:13.288 "model_number": "SPDK bdev Controller", 00:19:13.288 "max_namespaces": 32, 00:19:13.288 "min_cntlid": 1, 00:19:13.288 "max_cntlid": 65519, 00:19:13.288 "namespaces": [ 00:19:13.288 { 00:19:13.288 "nsid": 1, 00:19:13.288 "bdev_name": "Malloc1", 00:19:13.288 "name": "Malloc1", 00:19:13.288 "nguid": "2BDAF1B0F0004B7E98E075429349F85F", 00:19:13.288 "uuid": "2bdaf1b0-f000-4b7e-98e0-75429349f85f" 00:19:13.288 }, 00:19:13.288 { 00:19:13.288 "nsid": 2, 00:19:13.288 "bdev_name": "Malloc3", 00:19:13.288 "name": "Malloc3", 00:19:13.288 "nguid": "6D73AE1D79054A8980CA3C3C32D201A2", 00:19:13.288 "uuid": "6d73ae1d-7905-4a89-80ca-3c3c32d201a2" 00:19:13.288 } 00:19:13.288 ] 00:19:13.288 }, 00:19:13.288 { 00:19:13.288 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:13.288 "subtype": "NVMe", 00:19:13.288 "listen_addresses": [ 00:19:13.288 { 00:19:13.288 "trtype": "VFIOUSER", 00:19:13.288 "adrfam": "IPv4", 00:19:13.288 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:13.288 "trsvcid": "0" 00:19:13.288 } 00:19:13.288 ], 00:19:13.288 "allow_any_host": true, 00:19:13.288 "hosts": [], 00:19:13.288 "serial_number": "SPDK2", 00:19:13.288 "model_number": "SPDK bdev Controller", 00:19:13.288 "max_namespaces": 32, 00:19:13.288 "min_cntlid": 1, 00:19:13.288 "max_cntlid": 65519, 00:19:13.288 "namespaces": [ 
00:19:13.288 { 00:19:13.288 "nsid": 1, 00:19:13.288 "bdev_name": "Malloc2", 00:19:13.288 "name": "Malloc2", 00:19:13.289 "nguid": "4154BDB94A1F4EA1A0C2BAB3F710548B", 00:19:13.289 "uuid": "4154bdb9-4a1f-4ea1-a0c2-bab3f710548b" 00:19:13.289 }, 00:19:13.289 { 00:19:13.289 "nsid": 2, 00:19:13.289 "bdev_name": "Malloc4", 00:19:13.289 "name": "Malloc4", 00:19:13.289 "nguid": "3C2B87F1A898443F8D44A0E28117B9E8", 00:19:13.289 "uuid": "3c2b87f1-a898-443f-8d44-a0e28117b9e8" 00:19:13.289 } 00:19:13.289 ] 00:19:13.289 } 00:19:13.289 ] 00:19:13.289 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 972061 00:19:13.289 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:19:13.289 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 964123 00:19:13.289 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 964123 ']' 00:19:13.289 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 964123 00:19:13.289 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:19:13.289 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:13.289 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 964123 00:19:13.547 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:13.547 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:13.548 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 964123' 00:19:13.548 killing process with pid 964123 00:19:13.548 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 964123 00:19:13.548 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 964123 00:19:13.548 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:13.548 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:13.548 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:19:13.548 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:19:13.548 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:19:13.548 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=972277 00:19:13.548 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 972277' 00:19:13.548 Process pid: 972277 00:19:13.548 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:19:13.548 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:13.548 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 972277 00:19:13.548 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 972277 ']' 00:19:13.548 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.548 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:13.548 06:09:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:13.548 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:13.548 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:13.807 [2024-12-15 06:09:33.717684] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:19:13.807 [2024-12-15 06:09:33.718533] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:19:13.807 [2024-12-15 06:09:33.718568] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:13.807 [2024-12-15 06:09:33.791388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:13.807 [2024-12-15 06:09:33.813051] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:13.807 [2024-12-15 06:09:33.813088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:13.807 [2024-12-15 06:09:33.813096] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:13.807 [2024-12-15 06:09:33.813102] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:13.807 [2024-12-15 06:09:33.813107] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:13.807 [2024-12-15 06:09:33.814523] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.807 [2024-12-15 06:09:33.814633] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:13.807 [2024-12-15 06:09:33.814762] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.807 [2024-12-15 06:09:33.814763] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:19:13.807 [2024-12-15 06:09:33.877428] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:19:13.807 [2024-12-15 06:09:33.878261] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:19:13.807 [2024-12-15 06:09:33.878485] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:19:13.807 [2024-12-15 06:09:33.878961] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:19:13.807 [2024-12-15 06:09:33.878988] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:19:13.807 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:13.807 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:19:13.807 06:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:15.185 06:09:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:19:15.185 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:15.185 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:15.185 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:15.185 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:15.185 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:15.443 Malloc1 00:19:15.443 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:15.443 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:15.701 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:19:15.960 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:15.960 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:15.960 06:09:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:16.218 Malloc2 00:19:16.218 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:16.477 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:16.477 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:16.736 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:19:16.736 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 972277 00:19:16.736 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 972277 ']' 00:19:16.736 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 972277 00:19:16.736 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:19:16.736 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:16.736 06:09:36 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 972277 00:19:16.736 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:16.736 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:16.736 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 972277' 00:19:16.736 killing process with pid 972277 00:19:16.736 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 972277 00:19:16.736 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 972277 00:19:16.995 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:16.995 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:16.995 00:19:16.995 real 0m50.688s 00:19:16.995 user 3m16.091s 00:19:16.995 sys 0m3.242s 00:19:16.995 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:16.995 06:09:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:16.995 ************************************ 00:19:16.995 END TEST nvmf_vfio_user 00:19:16.995 ************************************ 00:19:16.995 06:09:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:16.995 06:09:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:16.995 06:09:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:16.995 06:09:37 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:19:16.995 ************************************ 00:19:16.995 START TEST nvmf_vfio_user_nvme_compliance 00:19:16.995 ************************************ 00:19:16.995 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:17.254 * Looking for test storage... 00:19:17.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:19:17.254 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:19:17.255 06:09:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:17.255 06:09:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:17.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.255 --rc genhtml_branch_coverage=1 00:19:17.255 --rc genhtml_function_coverage=1 00:19:17.255 --rc genhtml_legend=1 00:19:17.255 --rc geninfo_all_blocks=1 00:19:17.255 --rc geninfo_unexecuted_blocks=1 00:19:17.255 00:19:17.255 ' 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:17.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.255 --rc genhtml_branch_coverage=1 00:19:17.255 --rc genhtml_function_coverage=1 00:19:17.255 --rc genhtml_legend=1 00:19:17.255 --rc geninfo_all_blocks=1 00:19:17.255 --rc geninfo_unexecuted_blocks=1 00:19:17.255 00:19:17.255 ' 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:17.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.255 --rc genhtml_branch_coverage=1 00:19:17.255 --rc genhtml_function_coverage=1 00:19:17.255 --rc 
genhtml_legend=1 00:19:17.255 --rc geninfo_all_blocks=1 00:19:17.255 --rc geninfo_unexecuted_blocks=1 00:19:17.255 00:19:17.255 ' 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:17.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.255 --rc genhtml_branch_coverage=1 00:19:17.255 --rc genhtml_function_coverage=1 00:19:17.255 --rc genhtml_legend=1 00:19:17.255 --rc geninfo_all_blocks=1 00:19:17.255 --rc geninfo_unexecuted_blocks=1 00:19:17.255 00:19:17.255 ' 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.255 06:09:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:17.255 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:17.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:17.256 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:17.256 06:09:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:17.256 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:17.256 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:17.256 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:17.256 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:19:17.256 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:19:17.256 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:19:17.256 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=973019 00:19:17.256 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 973019' 00:19:17.256 Process pid: 973019 00:19:17.256 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:17.256 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 973019 00:19:17.256 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:17.256 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 973019 ']' 00:19:17.256 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:19:17.256 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:17.256 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.256 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:17.256 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:17.256 [2024-12-15 06:09:37.330584] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:19:17.256 [2024-12-15 06:09:37.330632] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.515 [2024-12-15 06:09:37.403946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:17.515 [2024-12-15 06:09:37.425951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:17.515 [2024-12-15 06:09:37.425986] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:17.515 [2024-12-15 06:09:37.425997] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:17.515 [2024-12-15 06:09:37.426005] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:17.515 [2024-12-15 06:09:37.426010] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:17.515 [2024-12-15 06:09:37.427207] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.515 [2024-12-15 06:09:37.427319] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.515 [2024-12-15 06:09:37.427320] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.515 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:17.515 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:19:17.515 06:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:18.452 06:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:18.452 06:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:18.452 06:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:18.452 06:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.452 06:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:18.452 06:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.452 06:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:18.452 06:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:18.452 06:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.452 06:09:38 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:18.452 malloc0 00:19:18.452 06:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.452 06:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:18.452 06:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.452 06:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:18.452 06:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.452 06:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:18.452 06:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.452 06:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:18.452 06:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.452 06:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:18.452 06:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.452 06:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:18.711 06:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:18.711 06:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:18.711 00:19:18.711 00:19:18.711 CUnit - A unit testing framework for C - Version 2.1-3 00:19:18.711 http://cunit.sourceforge.net/ 00:19:18.711 00:19:18.711 00:19:18.711 Suite: nvme_compliance 00:19:18.711 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-15 06:09:38.754450] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:18.711 [2024-12-15 06:09:38.755779] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:18.711 [2024-12-15 06:09:38.755793] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:18.711 [2024-12-15 06:09:38.755800] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:18.711 [2024-12-15 06:09:38.757475] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:18.711 passed 00:19:18.711 Test: admin_identify_ctrlr_verify_fused ...[2024-12-15 06:09:38.836051] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:18.711 [2024-12-15 06:09:38.839063] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:18.970 passed 00:19:18.970 Test: admin_identify_ns ...[2024-12-15 06:09:38.918781] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:18.970 [2024-12-15 06:09:38.979007] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:18.970 [2024-12-15 06:09:38.987006] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:18.970 [2024-12-15 06:09:39.008090] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:19:18.970 passed 00:19:18.970 Test: admin_get_features_mandatory_features ...[2024-12-15 06:09:39.081389] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:18.970 [2024-12-15 06:09:39.084413] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:19.229 passed 00:19:19.229 Test: admin_get_features_optional_features ...[2024-12-15 06:09:39.161947] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:19.229 [2024-12-15 06:09:39.164968] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:19.229 passed 00:19:19.229 Test: admin_set_features_number_of_queues ...[2024-12-15 06:09:39.240656] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:19.229 [2024-12-15 06:09:39.345087] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:19.488 passed 00:19:19.488 Test: admin_get_log_page_mandatory_logs ...[2024-12-15 06:09:39.420748] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:19.488 [2024-12-15 06:09:39.426786] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:19.488 passed 00:19:19.488 Test: admin_get_log_page_with_lpo ...[2024-12-15 06:09:39.501432] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:19.488 [2024-12-15 06:09:39.573003] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:19:19.488 [2024-12-15 06:09:39.586041] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:19.488 passed 00:19:19.747 Test: fabric_property_get ...[2024-12-15 06:09:39.658813] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:19.747 [2024-12-15 06:09:39.660062] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:19.747 [2024-12-15 06:09:39.661839] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:19.747 passed 00:19:19.747 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-15 06:09:39.739360] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:19.747 [2024-12-15 06:09:39.740593] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:19.747 [2024-12-15 06:09:39.742374] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:19.747 passed 00:19:19.747 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-15 06:09:39.819101] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:20.006 [2024-12-15 06:09:39.904998] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:20.006 [2024-12-15 06:09:39.921001] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:20.006 [2024-12-15 06:09:39.926090] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:20.006 passed 00:19:20.006 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-15 06:09:39.999858] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:20.006 [2024-12-15 06:09:40.001102] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:20.006 [2024-12-15 06:09:40.002871] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:20.006 passed 00:19:20.006 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-15 06:09:40.080805] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:20.265 [2024-12-15 06:09:40.155999] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:20.265 [2024-12-15 
06:09:40.179996] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:20.265 [2024-12-15 06:09:40.185170] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:20.265 passed 00:19:20.265 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-15 06:09:40.261766] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:20.265 [2024-12-15 06:09:40.263011] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:20.265 [2024-12-15 06:09:40.263037] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:20.265 [2024-12-15 06:09:40.267797] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:20.265 passed 00:19:20.265 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-15 06:09:40.346349] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:20.524 [2024-12-15 06:09:40.438999] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:20.524 [2024-12-15 06:09:40.447006] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:20.524 [2024-12-15 06:09:40.454998] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:20.524 [2024-12-15 06:09:40.463000] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:20.524 [2024-12-15 06:09:40.492094] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:20.524 passed 00:19:20.524 Test: admin_create_io_sq_verify_pc ...[2024-12-15 06:09:40.568052] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:20.524 [2024-12-15 06:09:40.586006] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:20.524 [2024-12-15 06:09:40.603252] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:20.524 passed 00:19:20.783 Test: admin_create_io_qp_max_qps ...[2024-12-15 06:09:40.681798] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:21.719 [2024-12-15 06:09:41.769000] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:19:22.287 [2024-12-15 06:09:42.150090] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:22.287 passed 00:19:22.287 Test: admin_create_io_sq_shared_cq ...[2024-12-15 06:09:42.222995] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:22.287 [2024-12-15 06:09:42.356006] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:22.287 [2024-12-15 06:09:42.393058] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:22.287 passed 00:19:22.287 00:19:22.287 Run Summary: Type Total Ran Passed Failed Inactive 00:19:22.287 suites 1 1 n/a 0 0 00:19:22.287 tests 18 18 18 0 0 00:19:22.287 asserts 360 360 360 0 n/a 00:19:22.287 00:19:22.287 Elapsed time = 1.497 seconds 00:19:22.546 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 973019 00:19:22.546 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 973019 ']' 00:19:22.546 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 973019 00:19:22.546 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:19:22.546 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:22.546 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 973019 00:19:22.546 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:22.546 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:22.546 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 973019' 00:19:22.546 killing process with pid 973019 00:19:22.546 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 973019 00:19:22.546 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 973019 00:19:22.546 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:22.546 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:22.546 00:19:22.546 real 0m5.589s 00:19:22.546 user 0m15.674s 00:19:22.546 sys 0m0.509s 00:19:22.546 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:22.546 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:22.546 ************************************ 00:19:22.546 END TEST nvmf_vfio_user_nvme_compliance 00:19:22.546 ************************************ 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:22.806 ************************************ 00:19:22.806 START TEST nvmf_vfio_user_fuzz 00:19:22.806 ************************************ 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:22.806 * Looking for test storage... 00:19:22.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:19:22.806 06:09:42 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:22.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.806 --rc genhtml_branch_coverage=1 00:19:22.806 --rc genhtml_function_coverage=1 00:19:22.806 --rc genhtml_legend=1 00:19:22.806 --rc geninfo_all_blocks=1 00:19:22.806 --rc geninfo_unexecuted_blocks=1 00:19:22.806 00:19:22.806 ' 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:22.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.806 --rc genhtml_branch_coverage=1 00:19:22.806 --rc genhtml_function_coverage=1 00:19:22.806 --rc genhtml_legend=1 00:19:22.806 --rc geninfo_all_blocks=1 00:19:22.806 --rc geninfo_unexecuted_blocks=1 00:19:22.806 00:19:22.806 ' 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:22.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.806 --rc genhtml_branch_coverage=1 00:19:22.806 --rc genhtml_function_coverage=1 00:19:22.806 --rc genhtml_legend=1 00:19:22.806 --rc geninfo_all_blocks=1 00:19:22.806 --rc geninfo_unexecuted_blocks=1 00:19:22.806 00:19:22.806 ' 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:22.806 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:19:22.806 --rc genhtml_branch_coverage=1 00:19:22.806 --rc genhtml_function_coverage=1 00:19:22.806 --rc genhtml_legend=1 00:19:22.806 --rc geninfo_all_blocks=1 00:19:22.806 --rc geninfo_unexecuted_blocks=1 00:19:22.806 00:19:22.806 ' 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:22.806 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.807 06:09:42 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:22.807 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=973976 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 973976' 00:19:22.807 Process pid: 973976 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 973976 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 973976 ']' 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.807 06:09:42 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.807 06:09:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:23.066 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.066 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:19:23.066 06:09:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:24.444 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:24.444 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.444 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:24.444 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.444 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:24.444 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:24.444 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.444 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:24.444 malloc0 00:19:24.444 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.444 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:24.444 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.444 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:24.444 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.444 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:24.444 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.444 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:24.444 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.444 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:24.444 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.444 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:24.444 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.444 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:19:24.444 06:09:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:56.526 Fuzzing completed. Shutting down the fuzz application 00:19:56.526 00:19:56.526 Dumping successful admin opcodes: 00:19:56.526 9, 10, 00:19:56.526 Dumping successful io opcodes: 00:19:56.526 0, 00:19:56.526 NS: 0x20000081ef00 I/O qp, Total commands completed: 1006576, total successful commands: 3947, random_seed: 3288759872 00:19:56.526 NS: 0x20000081ef00 admin qp, Total commands completed: 241488, total successful commands: 56, random_seed: 4020242368 00:19:56.526 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:56.526 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.526 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:56.526 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.526 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 973976 00:19:56.526 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 973976 ']' 00:19:56.526 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 973976 00:19:56.526 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:19:56.526 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.526 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 973976 00:19:56.526 06:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:56.526 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:56.526 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 973976' 00:19:56.526 killing process with pid 973976 00:19:56.526 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 973976 00:19:56.526 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 973976 00:19:56.526 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:56.526 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:56.526 00:19:56.526 real 0m32.167s 00:19:56.526 user 0m29.252s 00:19:56.526 sys 0m31.774s 00:19:56.526 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:56.526 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:56.526 ************************************ 00:19:56.526 END TEST nvmf_vfio_user_fuzz 00:19:56.526 ************************************ 00:19:56.526 06:10:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:56.526 06:10:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:56.526 06:10:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:19:56.526 06:10:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:56.526 ************************************ 00:19:56.526 START TEST nvmf_auth_target 00:19:56.526 ************************************ 00:19:56.526 06:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:56.526 * Looking for test storage... 00:19:56.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:56.526 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:56.526 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:19:56.526 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:56.526 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:56.526 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:56.526 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:56.526 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:56.526 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:56.526 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:56.526 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:56.526 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:56.526 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:56.526 06:10:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:56.526 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:56.526 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:56.526 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:56.526 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:56.526 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:56.526 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:56.526 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:56.526 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:56.526 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:56.526 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:56.526 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:56.679 06:10:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:56.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.679 --rc genhtml_branch_coverage=1 00:19:56.679 --rc genhtml_function_coverage=1 00:19:56.679 --rc genhtml_legend=1 00:19:56.679 --rc geninfo_all_blocks=1 00:19:56.679 --rc geninfo_unexecuted_blocks=1 00:19:56.679 00:19:56.679 ' 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:56.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.679 --rc genhtml_branch_coverage=1 00:19:56.679 --rc genhtml_function_coverage=1 00:19:56.679 --rc genhtml_legend=1 00:19:56.679 --rc geninfo_all_blocks=1 00:19:56.679 --rc geninfo_unexecuted_blocks=1 00:19:56.679 00:19:56.679 ' 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:56.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.679 --rc genhtml_branch_coverage=1 00:19:56.679 --rc genhtml_function_coverage=1 00:19:56.679 --rc genhtml_legend=1 00:19:56.679 --rc geninfo_all_blocks=1 00:19:56.679 --rc geninfo_unexecuted_blocks=1 00:19:56.679 00:19:56.679 ' 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:56.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.679 --rc genhtml_branch_coverage=1 00:19:56.679 --rc genhtml_function_coverage=1 00:19:56.679 --rc genhtml_legend=1 00:19:56.679 
--rc geninfo_all_blocks=1 00:19:56.679 --rc geninfo_unexecuted_blocks=1 00:19:56.679 00:19:56.679 ' 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.679 
06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:56.679 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.680 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.680 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:56.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:56.680 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:56.680 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:56.680 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:56.680 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:56.680 06:10:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:56.680 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:56.680 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:56.680 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:56.680 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:56.680 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:56.680 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:56.680 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:56.680 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.680 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:56.680 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:56.680 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:56.680 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.680 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:56.680 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.680 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:56.680 06:10:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:56.680 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:56.680 06:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:20:00.882 06:10:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:00.882 06:10:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:00.882 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:00.882 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.882 
06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:00.882 Found net devices under 0000:af:00.0: cvl_0_0 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:00.882 
06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:00.882 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.883 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:00.883 Found net devices under 0000:af:00.1: cvl_0_1 00:20:00.883 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.883 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:00.883 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:20:00.883 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:00.883 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:00.883 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:00.883 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:00.883 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:00.883 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:00.883 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:00.883 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:00.883 06:10:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:00.883 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:00.883 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:00.883 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:00.883 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:00.883 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:00.883 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:00.883 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:00.883 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:00.883 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:00.883 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:00.883 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:00.883 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:00.883 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:01.142 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:01.142 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:01.142 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:01.142 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:01.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:01.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:20:01.142 00:20:01.142 --- 10.0.0.2 ping statistics --- 00:20:01.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.142 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:20:01.142 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:01.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:01.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:20:01.142 00:20:01.142 --- 10.0.0.1 ping statistics --- 00:20:01.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.142 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:20:01.142 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:01.142 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:20:01.142 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:01.142 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:01.142 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:01.142 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:01.142 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:01.142 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:01.142 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:01.142 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:20:01.142 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:01.142 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:01.142 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.142 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=982081 00:20:01.142 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:01.142 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 982081 00:20:01.142 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 982081 ']' 00:20:01.142 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.142 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.142 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:01.142 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.142 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=982116 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # digest=null 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5f948f411c377afd2c624ce53440d577c764d1e11e048680 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.qPB 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5f948f411c377afd2c624ce53440d577c764d1e11e048680 0 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5f948f411c377afd2c624ce53440d577c764d1e11e048680 0 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5f948f411c377afd2c624ce53440d577c764d1e11e048680 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.qPB 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.qPB 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.qPB 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cb88fc08ca5276b3eb09d14d6574def20c6151860ba52e8bb4d5c1d1a60d0bb5 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.dVK 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cb88fc08ca5276b3eb09d14d6574def20c6151860ba52e8bb4d5c1d1a60d0bb5 3 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cb88fc08ca5276b3eb09d14d6574def20c6151860ba52e8bb4d5c1d1a60d0bb5 3 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:01.402 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:01.403 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cb88fc08ca5276b3eb09d14d6574def20c6151860ba52e8bb4d5c1d1a60d0bb5 00:20:01.403 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:20:01.403 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:01.403 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.dVK 00:20:01.403 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.dVK 00:20:01.403 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.dVK 00:20:01.403 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:20:01.403 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:01.403 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:01.403 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:01.403 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:20:01.403 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:01.403 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:01.403 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=787b11afa6523886a9cb21b0d1e310be 00:20:01.403 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:01.662 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.BPS 00:20:01.662 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 787b11afa6523886a9cb21b0d1e310be 1 00:20:01.662 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
787b11afa6523886a9cb21b0d1e310be 1 00:20:01.662 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:01.662 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:01.662 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=787b11afa6523886a9cb21b0d1e310be 00:20:01.662 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:20:01.662 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:01.662 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.BPS 00:20:01.662 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.BPS 00:20:01.662 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.BPS 00:20:01.662 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:20:01.662 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:01.662 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:01.662 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:01.662 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:01.662 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=244168353097043ce5571324d619e1e94ab3adc8486101f1 00:20:01.663 06:10:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.3x2 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 244168353097043ce5571324d619e1e94ab3adc8486101f1 2 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 244168353097043ce5571324d619e1e94ab3adc8486101f1 2 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=244168353097043ce5571324d619e1e94ab3adc8486101f1 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.3x2 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.3x2 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.3x2 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e14da368fe4359eb830ae438e18a6e4813be5a40e71cc0fc 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.5PN 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e14da368fe4359eb830ae438e18a6e4813be5a40e71cc0fc 2 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e14da368fe4359eb830ae438e18a6e4813be5a40e71cc0fc 2 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e14da368fe4359eb830ae438e18a6e4813be5a40e71cc0fc 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.5PN 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.5PN 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.5PN 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=871030c9e7be1e833576e47d8c2575cf 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ZML 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 871030c9e7be1e833576e47d8c2575cf 1 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 871030c9e7be1e833576e47d8c2575cf 1 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=871030c9e7be1e833576e47d8c2575cf 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ZML 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ZML 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.ZML 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b2d90ce8231453140de98b682446862c0db382ee88caf5e89cade551cc7e4e24 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.lOx 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b2d90ce8231453140de98b682446862c0db382ee88caf5e89cade551cc7e4e24 3 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 b2d90ce8231453140de98b682446862c0db382ee88caf5e89cade551cc7e4e24 3 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b2d90ce8231453140de98b682446862c0db382ee88caf5e89cade551cc7e4e24 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:20:01.663 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:01.922 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.lOx 00:20:01.922 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.lOx 00:20:01.922 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.lOx 00:20:01.922 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:20:01.922 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 982081 00:20:01.922 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 982081 ']' 00:20:01.922 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.922 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.922 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
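The `gen_dhchap_key` / `format_dhchap_key` trace above draws hex from `/dev/urandom` via `xxd`, then pipes it through an inline `python -` step to produce the secret written to `/tmp/spdk.key-*`. A minimal standalone sketch of those two steps, assuming the encoding follows the NVMe DH-HMAC-CHAP secret layout (`DHHC-1:<2-hex-digit digest id>:base64(ascii_key + crc32_le):`) — the function names mirror the shell helpers but are otherwise illustrative:

```python
import base64
import os
import zlib

def gen_dhchap_key(length: int) -> str:
    # Equivalent of `xxd -p -c0 -l <length/2> /dev/urandom`:
    # a hex string of `length` characters.
    return os.urandom(length // 2).hex()

def format_dhchap_key(key: str, digest: int, prefix: str = "DHHC-1") -> str:
    # Append a little-endian CRC-32 of the ASCII key, base64-encode,
    # and wrap with the prefix and digest id (0=null, 1=sha256,
    # 2=sha384, 3=sha512, matching the `digests` map in the trace).
    raw = key.encode("ascii")
    crc = zlib.crc32(raw).to_bytes(4, "little")
    b64 = base64.b64encode(raw + crc).decode("ascii")
    return "{}:{:02x}:{}:".format(prefix, digest, b64)
```

For example, `format_dhchap_key(gen_dhchap_key(64), 3)` yields a sha512-class secret of the same shape as the keys stored in `/tmp/spdk.key-sha512.*` above.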
00:20:01.922 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.922 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.922 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:01.922 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:01.922 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 982116 /var/tmp/host.sock 00:20:01.922 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 982116 ']' 00:20:01.922 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:20:01.922 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.922 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:01.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:20:01.922 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.923 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.181 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.181 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:02.182 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:20:02.182 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.182 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.182 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.182 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:02.182 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.qPB 00:20:02.182 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.182 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.182 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.182 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.qPB 00:20:02.182 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.qPB 00:20:02.441 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.dVK ]] 00:20:02.441 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dVK 00:20:02.441 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.441 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.441 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.441 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dVK 00:20:02.441 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dVK 00:20:02.700 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:02.700 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.BPS 00:20:02.700 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.700 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.700 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.700 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.BPS 00:20:02.700 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.BPS 00:20:02.958 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.3x2 ]] 00:20:02.958 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3x2 00:20:02.958 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.958 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.958 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.958 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3x2 00:20:02.958 06:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3x2 00:20:02.958 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:02.958 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.5PN 00:20:02.958 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.958 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.958 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.958 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.5PN 00:20:02.958 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.5PN 00:20:03.217 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.ZML ]] 00:20:03.217 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZML 00:20:03.217 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.217 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.217 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.217 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZML 00:20:03.217 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZML 00:20:03.476 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:03.476 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.lOx 00:20:03.476 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.476 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.476 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.476 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.lOx 00:20:03.476 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.lOx 00:20:03.736 06:10:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:20:03.736 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:03.736 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:03.736 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.736 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:03.736 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:03.736 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:20:03.736 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.736 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:03.736 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:03.736 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:03.736 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.736 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.736 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.736 06:10:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.736 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.736 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.736 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.736 06:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.995 00:20:03.995 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.995 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.995 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.255 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.255 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.255 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.255 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:04.255 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.255 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.255 { 00:20:04.255 "cntlid": 1, 00:20:04.255 "qid": 0, 00:20:04.255 "state": "enabled", 00:20:04.255 "thread": "nvmf_tgt_poll_group_000", 00:20:04.255 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:04.255 "listen_address": { 00:20:04.255 "trtype": "TCP", 00:20:04.255 "adrfam": "IPv4", 00:20:04.255 "traddr": "10.0.0.2", 00:20:04.255 "trsvcid": "4420" 00:20:04.255 }, 00:20:04.255 "peer_address": { 00:20:04.255 "trtype": "TCP", 00:20:04.255 "adrfam": "IPv4", 00:20:04.255 "traddr": "10.0.0.1", 00:20:04.255 "trsvcid": "35754" 00:20:04.255 }, 00:20:04.255 "auth": { 00:20:04.255 "state": "completed", 00:20:04.255 "digest": "sha256", 00:20:04.255 "dhgroup": "null" 00:20:04.255 } 00:20:04.255 } 00:20:04.255 ]' 00:20:04.255 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.255 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.255 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.514 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:04.514 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.514 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.514 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.514 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.514 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:20:04.514 06:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:20:05.096 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.096 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:05.096 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.096 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.096 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.096 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.096 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null
00:20:05.096 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:20:05.358 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1
00:20:05.358 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:05.358 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:05.359 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:05.359 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:05.359 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:05.359 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:05.359 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:05.359 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:05.359 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:05.359 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:05.359 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:05.359 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:05.617
00:20:05.617 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:05.617 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:05.617 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:05.876 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:05.876 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:05.876 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:05.876 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:05.876 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:05.876 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:05.876 {
00:20:05.876 "cntlid": 3,
00:20:05.876 "qid": 0,
00:20:05.876 "state": "enabled",
00:20:05.876 "thread": "nvmf_tgt_poll_group_000",
00:20:05.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:05.876 "listen_address": {
00:20:05.876 "trtype": "TCP",
00:20:05.876 "adrfam": "IPv4",
00:20:05.876 "traddr": "10.0.0.2",
00:20:05.876 "trsvcid": "4420"
00:20:05.876 },
00:20:05.876 "peer_address": {
00:20:05.876 "trtype": "TCP",
00:20:05.876 "adrfam": "IPv4",
00:20:05.876 "traddr": "10.0.0.1",
00:20:05.876 "trsvcid": "35782"
00:20:05.876 },
00:20:05.876 "auth": {
00:20:05.876 "state": "completed",
00:20:05.876 "digest": "sha256",
00:20:05.876 "dhgroup": "null"
00:20:05.876 }
00:20:05.876 }
00:20:05.876 ]'
00:20:05.876 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:05.876 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:05.876 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:05.876 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:05.876 06:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:05.876 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:05.876 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:05.876 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:06.135 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==:
00:20:06.135 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==:
00:20:06.703 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:06.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:06.703 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:06.703 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:06.703 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:06.703 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:06.703 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:06.703 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:20:06.703 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:20:06.962 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2
00:20:06.962 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:06.962 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:06.962 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:06.962 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:06.962 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:06.962 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:06.962 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:06.962 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:06.962 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:06.962 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:06.962 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:06.962 06:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:07.221
00:20:07.221 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:07.221 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:07.221 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:07.480 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:07.480 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:07.480 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:07.480 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:07.480 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:07.480 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:07.480 {
00:20:07.480 "cntlid": 5,
00:20:07.480 "qid": 0,
00:20:07.480 "state": "enabled",
00:20:07.480 "thread": "nvmf_tgt_poll_group_000",
00:20:07.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:07.480 "listen_address": {
00:20:07.480 "trtype": "TCP",
00:20:07.480 "adrfam": "IPv4",
00:20:07.480 "traddr": "10.0.0.2",
00:20:07.480 "trsvcid": "4420"
00:20:07.480 },
00:20:07.480 "peer_address": {
00:20:07.480 "trtype": "TCP",
00:20:07.480 "adrfam": "IPv4",
00:20:07.480 "traddr": "10.0.0.1",
00:20:07.480 "trsvcid": "38800"
00:20:07.480 },
00:20:07.480 "auth": {
00:20:07.480 "state": "completed",
00:20:07.480 "digest": "sha256",
00:20:07.480 "dhgroup": "null"
00:20:07.480 }
00:20:07.480 }
00:20:07.480 ]'
00:20:07.480 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:07.480 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:07.480 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:07.480 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:07.480 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:07.480 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:07.480 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:07.481 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:07.739 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz:
00:20:07.739 06:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz:
00:20:08.307 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:08.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:08.307 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:08.307 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.307 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:08.307 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.307 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:08.307 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:20:08.307 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:20:08.566 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3
00:20:08.566 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:08.566 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:08.566 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:08.566 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:08.566 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:08.566 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:20:08.566 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.566 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:08.566 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.566 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:08.566 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:08.566 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:08.825
00:20:08.825 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:08.825 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:08.825 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:09.084 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:09.084 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:09.084 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:09.084 06:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:09.084 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:09.084 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:09.084 {
00:20:09.084 "cntlid": 7,
00:20:09.084 "qid": 0,
00:20:09.084 "state": "enabled",
00:20:09.084 "thread": "nvmf_tgt_poll_group_000",
00:20:09.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:09.084 "listen_address": {
00:20:09.084 "trtype": "TCP",
00:20:09.084 "adrfam": "IPv4",
00:20:09.084 "traddr": "10.0.0.2",
00:20:09.084 "trsvcid": "4420"
00:20:09.084 },
00:20:09.084 "peer_address": {
00:20:09.084 "trtype": "TCP",
00:20:09.084 "adrfam": "IPv4",
00:20:09.084 "traddr": "10.0.0.1",
00:20:09.084 "trsvcid": "38830"
00:20:09.084 },
00:20:09.084 "auth": {
00:20:09.084 "state": "completed",
00:20:09.084 "digest": "sha256",
00:20:09.084 "dhgroup": "null"
00:20:09.084 }
00:20:09.084 }
00:20:09.084 ]'
00:20:09.084 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:09.084 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:09.084 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:09.084 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:09.084 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:09.084 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:09.084 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:09.084 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:09.343 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=:
00:20:09.343 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=:
00:20:09.910 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:09.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:09.910 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:09.910 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:09.910 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:09.910 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:09.910 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:09.910 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:09.910 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:20:09.910 06:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:20:10.169 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0
00:20:10.169 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:10.169 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:10.169 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:20:10.169 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:10.169 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:10.169 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:10.169 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:10.169 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:10.169 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:10.169 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:10.169 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:10.169 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:10.427
00:20:10.427 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:10.427 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:10.427 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:10.427 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:10.427 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:10.427 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:10.427 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:10.686 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:10.687 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:10.687 {
00:20:10.687 "cntlid": 9,
00:20:10.687 "qid": 0,
00:20:10.687 "state": "enabled",
00:20:10.687 "thread": "nvmf_tgt_poll_group_000",
00:20:10.687 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:10.687 "listen_address": {
00:20:10.687 "trtype": "TCP",
00:20:10.687 "adrfam": "IPv4",
00:20:10.687 "traddr": "10.0.0.2",
00:20:10.687 "trsvcid": "4420"
00:20:10.687 },
00:20:10.687 "peer_address": {
00:20:10.687 "trtype": "TCP",
00:20:10.687 "adrfam": "IPv4",
00:20:10.687 "traddr": "10.0.0.1",
00:20:10.687 "trsvcid": "38848"
00:20:10.687 },
00:20:10.687 "auth": {
00:20:10.687 "state": "completed",
00:20:10.687 "digest": "sha256",
00:20:10.687 "dhgroup": "ffdhe2048"
00:20:10.687 }
00:20:10.687 }
00:20:10.687 ]'
00:20:10.687 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:10.687 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:10.687 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:10.687 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:10.687 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:10.687 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:10.687 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:10.687 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:10.945 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=:
00:20:10.945 06:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=:
00:20:11.513 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:11.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:11.513 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:11.513 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:11.513 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:11.513 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:11.513 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:11.513 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:20:11.513 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:20:11.513 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1
00:20:11.513 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:11.513 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:11.513 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:20:11.513 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:11.513 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:11.513 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:11.513 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:11.513 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:11.773 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:11.773 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:11.773 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:11.773 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:11.773
00:20:12.032 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:12.032 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:12.032 06:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:12.032 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:12.032 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:12.032 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:12.032 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:12.032 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:12.032 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:12.032 {
00:20:12.033 "cntlid": 11,
00:20:12.033 "qid": 0,
00:20:12.033 "state": "enabled",
00:20:12.033 "thread": "nvmf_tgt_poll_group_000",
00:20:12.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:12.033 "listen_address": {
00:20:12.033 "trtype": "TCP",
00:20:12.033 "adrfam": "IPv4",
00:20:12.033 "traddr": "10.0.0.2",
00:20:12.033 "trsvcid": "4420"
00:20:12.033 },
00:20:12.033 "peer_address": {
00:20:12.033 "trtype": "TCP",
00:20:12.033 "adrfam": "IPv4",
00:20:12.033 "traddr": "10.0.0.1",
00:20:12.033 "trsvcid": "38862"
00:20:12.033 },
00:20:12.033 "auth": {
00:20:12.033 "state": "completed",
00:20:12.033 "digest": "sha256",
00:20:12.033 "dhgroup": "ffdhe2048"
00:20:12.033 }
00:20:12.033 }
00:20:12.033 ]'
00:20:12.033 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:12.292 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:12.292 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:12.292 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:12.292 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:12.292 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:12.292 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:12.292 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:12.551 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==:
00:20:12.551 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==:
00:20:13.120 06:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:13.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:13.120 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:13.120 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:13.120 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:13.120 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:13.120 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:13.120 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:20:13.120 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:20:13.120 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2
00:20:13.120 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:13.120 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:13.120 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:20:13.120 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:13.120 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:13.120 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:13.120 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:13.120 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:13.120 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:13.120 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:13.120 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:13.120 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:13.379
00:20:13.379 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:13.379 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:13.379 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:13.639 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:13.639 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:13.639 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:13.639 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:13.639 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:13.639 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:13.639 {
00:20:13.639 "cntlid": 13,
00:20:13.639 "qid": 0,
00:20:13.639 "state": "enabled",
00:20:13.639 "thread": "nvmf_tgt_poll_group_000",
00:20:13.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:13.639 "listen_address": {
00:20:13.639 "trtype": "TCP",
00:20:13.639 "adrfam": "IPv4",
00:20:13.639 "traddr": "10.0.0.2",
00:20:13.639 "trsvcid": "4420"
00:20:13.639 },
00:20:13.639 "peer_address": {
00:20:13.639 "trtype": "TCP",
00:20:13.639 "adrfam": "IPv4",
00:20:13.639 "traddr": "10.0.0.1",
00:20:13.639 "trsvcid": "38896"
00:20:13.639 },
00:20:13.639 "auth": {
00:20:13.639 "state": "completed",
00:20:13.639 "digest": "sha256",
00:20:13.639 "dhgroup": "ffdhe2048"
00:20:13.639 }
00:20:13.639 }
00:20:13.639 ]'
00:20:13.639 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:13.639 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:13.639 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:13.639 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:13.639 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:13.899 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:13.899 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc
bdev_nvme_detach_controller nvme0 00:20:13.899 06:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.899 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:20:13.899 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:20:14.467 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.467 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:14.467 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.467 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.467 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.467 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.467 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:14.467 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:14.726 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:20:14.726 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.726 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:14.726 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:14.726 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:14.726 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.726 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:14.726 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.726 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.726 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.726 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:14.726 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:14.726 06:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:14.985 00:20:14.985 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.985 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.985 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.244 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.244 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.244 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.244 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.244 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.244 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.244 { 00:20:15.244 "cntlid": 15, 00:20:15.244 "qid": 0, 00:20:15.244 "state": "enabled", 00:20:15.244 "thread": "nvmf_tgt_poll_group_000", 00:20:15.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:15.244 "listen_address": { 00:20:15.244 "trtype": "TCP", 00:20:15.244 "adrfam": "IPv4", 00:20:15.244 "traddr": "10.0.0.2", 00:20:15.244 "trsvcid": 
"4420" 00:20:15.244 }, 00:20:15.244 "peer_address": { 00:20:15.244 "trtype": "TCP", 00:20:15.244 "adrfam": "IPv4", 00:20:15.244 "traddr": "10.0.0.1", 00:20:15.244 "trsvcid": "38934" 00:20:15.244 }, 00:20:15.244 "auth": { 00:20:15.244 "state": "completed", 00:20:15.244 "digest": "sha256", 00:20:15.244 "dhgroup": "ffdhe2048" 00:20:15.244 } 00:20:15.244 } 00:20:15.244 ]' 00:20:15.244 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.244 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:15.244 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.244 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:15.244 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.244 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.244 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.244 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.503 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:20:15.503 06:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:20:16.070 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.070 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:16.070 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.070 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.070 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.070 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:16.070 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.070 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:16.070 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:16.329 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:16.329 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.329 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:16.329 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:16.329 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:16.329 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.329 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.329 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.329 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.329 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.329 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.329 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.329 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.587 00:20:16.587 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.587 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:20:16.587 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.846 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.846 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.846 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.846 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.846 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.846 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.846 { 00:20:16.846 "cntlid": 17, 00:20:16.846 "qid": 0, 00:20:16.846 "state": "enabled", 00:20:16.846 "thread": "nvmf_tgt_poll_group_000", 00:20:16.846 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:16.846 "listen_address": { 00:20:16.846 "trtype": "TCP", 00:20:16.846 "adrfam": "IPv4", 00:20:16.846 "traddr": "10.0.0.2", 00:20:16.846 "trsvcid": "4420" 00:20:16.846 }, 00:20:16.846 "peer_address": { 00:20:16.846 "trtype": "TCP", 00:20:16.846 "adrfam": "IPv4", 00:20:16.846 "traddr": "10.0.0.1", 00:20:16.846 "trsvcid": "38968" 00:20:16.846 }, 00:20:16.846 "auth": { 00:20:16.846 "state": "completed", 00:20:16.846 "digest": "sha256", 00:20:16.846 "dhgroup": "ffdhe3072" 00:20:16.846 } 00:20:16.846 } 00:20:16.846 ]' 00:20:16.846 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.846 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:16.846 06:10:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.846 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:16.846 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.846 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.846 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.846 06:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.105 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:20:17.105 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:20:17.672 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.672 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:17.672 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.672 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.672 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.672 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.672 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:17.672 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:17.930 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:17.931 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.931 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:17.931 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:17.931 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:17.931 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.931 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.931 06:10:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.931 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.931 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.931 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.931 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.931 06:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.189 00:20:18.189 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.189 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.189 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.447 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.447 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.447 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.447 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.447 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.447 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.447 { 00:20:18.447 "cntlid": 19, 00:20:18.447 "qid": 0, 00:20:18.447 "state": "enabled", 00:20:18.447 "thread": "nvmf_tgt_poll_group_000", 00:20:18.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:18.448 "listen_address": { 00:20:18.448 "trtype": "TCP", 00:20:18.448 "adrfam": "IPv4", 00:20:18.448 "traddr": "10.0.0.2", 00:20:18.448 "trsvcid": "4420" 00:20:18.448 }, 00:20:18.448 "peer_address": { 00:20:18.448 "trtype": "TCP", 00:20:18.448 "adrfam": "IPv4", 00:20:18.448 "traddr": "10.0.0.1", 00:20:18.448 "trsvcid": "45818" 00:20:18.448 }, 00:20:18.448 "auth": { 00:20:18.448 "state": "completed", 00:20:18.448 "digest": "sha256", 00:20:18.448 "dhgroup": "ffdhe3072" 00:20:18.448 } 00:20:18.448 } 00:20:18.448 ]' 00:20:18.448 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.448 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:18.448 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.448 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:18.448 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.448 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.448 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:18.448 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.767 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:20:18.767 06:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:20:19.416 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.416 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:19.416 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.416 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.416 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.416 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.416 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:19.416 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:19.416 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:19.416 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.416 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:19.416 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:19.416 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:19.416 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.416 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.416 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.416 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.416 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.416 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.417 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.417 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.676 00:20:19.676 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.676 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.676 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.934 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.934 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.934 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.934 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.934 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.934 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.934 { 00:20:19.934 "cntlid": 21, 00:20:19.934 "qid": 0, 00:20:19.934 "state": "enabled", 00:20:19.934 "thread": "nvmf_tgt_poll_group_000", 00:20:19.934 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:19.934 "listen_address": { 
00:20:19.934 "trtype": "TCP", 00:20:19.934 "adrfam": "IPv4", 00:20:19.934 "traddr": "10.0.0.2", 00:20:19.934 "trsvcid": "4420" 00:20:19.934 }, 00:20:19.934 "peer_address": { 00:20:19.934 "trtype": "TCP", 00:20:19.934 "adrfam": "IPv4", 00:20:19.934 "traddr": "10.0.0.1", 00:20:19.934 "trsvcid": "45832" 00:20:19.934 }, 00:20:19.934 "auth": { 00:20:19.934 "state": "completed", 00:20:19.934 "digest": "sha256", 00:20:19.934 "dhgroup": "ffdhe3072" 00:20:19.934 } 00:20:19.934 } 00:20:19.934 ]' 00:20:19.934 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.934 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:19.935 06:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.935 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:19.935 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.194 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.194 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.194 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.194 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:20:20.194 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:20:20.762 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.762 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:20.762 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.762 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.762 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.762 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.762 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:20.762 06:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:21.021 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:21.021 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.021 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:20:21.021 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:21.021 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:21.021 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.021 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:21.021 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.021 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.021 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.021 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:21.021 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:21.021 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:21.280 00:20:21.280 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.280 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:20:21.280 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.539 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.539 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.539 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.539 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.539 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.539 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.539 { 00:20:21.539 "cntlid": 23, 00:20:21.539 "qid": 0, 00:20:21.539 "state": "enabled", 00:20:21.539 "thread": "nvmf_tgt_poll_group_000", 00:20:21.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:21.539 "listen_address": { 00:20:21.539 "trtype": "TCP", 00:20:21.539 "adrfam": "IPv4", 00:20:21.539 "traddr": "10.0.0.2", 00:20:21.539 "trsvcid": "4420" 00:20:21.539 }, 00:20:21.539 "peer_address": { 00:20:21.539 "trtype": "TCP", 00:20:21.539 "adrfam": "IPv4", 00:20:21.539 "traddr": "10.0.0.1", 00:20:21.539 "trsvcid": "45856" 00:20:21.539 }, 00:20:21.539 "auth": { 00:20:21.539 "state": "completed", 00:20:21.539 "digest": "sha256", 00:20:21.539 "dhgroup": "ffdhe3072" 00:20:21.539 } 00:20:21.539 } 00:20:21.539 ]' 00:20:21.539 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.539 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:21.539 06:10:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.539 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:21.539 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.539 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.539 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.539 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.797 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:20:21.797 06:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:20:22.364 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.364 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:22.364 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:22.364 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.364 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.364 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:22.364 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.364 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:22.364 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:22.623 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:22.623 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.623 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:22.623 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:22.623 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:22.623 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.623 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.623 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:22.623 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.623 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.623 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.623 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.623 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.881 00:20:22.881 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.881 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.881 06:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.140 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.140 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.140 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.140 06:10:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.140 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.140 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.140 { 00:20:23.140 "cntlid": 25, 00:20:23.140 "qid": 0, 00:20:23.140 "state": "enabled", 00:20:23.140 "thread": "nvmf_tgt_poll_group_000", 00:20:23.140 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:23.140 "listen_address": { 00:20:23.140 "trtype": "TCP", 00:20:23.140 "adrfam": "IPv4", 00:20:23.140 "traddr": "10.0.0.2", 00:20:23.140 "trsvcid": "4420" 00:20:23.140 }, 00:20:23.140 "peer_address": { 00:20:23.140 "trtype": "TCP", 00:20:23.140 "adrfam": "IPv4", 00:20:23.140 "traddr": "10.0.0.1", 00:20:23.140 "trsvcid": "45888" 00:20:23.140 }, 00:20:23.140 "auth": { 00:20:23.140 "state": "completed", 00:20:23.140 "digest": "sha256", 00:20:23.140 "dhgroup": "ffdhe4096" 00:20:23.140 } 00:20:23.140 } 00:20:23.140 ]' 00:20:23.140 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.140 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:23.140 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.140 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:23.140 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.140 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.140 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.140 06:10:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.399 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:20:23.399 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:20:23.967 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.967 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:23.967 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.967 06:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.967 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.967 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.967 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:23.967 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:24.226 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:24.226 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.226 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:24.226 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:24.226 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:24.226 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.226 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.226 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.226 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.226 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.226 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.226 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.226 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.484 00:20:24.484 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.484 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.484 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.743 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.743 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.743 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.743 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.743 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.743 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.743 { 00:20:24.743 "cntlid": 27, 00:20:24.743 "qid": 0, 00:20:24.743 "state": "enabled", 00:20:24.743 "thread": "nvmf_tgt_poll_group_000", 00:20:24.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:24.743 
"listen_address": { 00:20:24.743 "trtype": "TCP", 00:20:24.743 "adrfam": "IPv4", 00:20:24.743 "traddr": "10.0.0.2", 00:20:24.743 "trsvcid": "4420" 00:20:24.743 }, 00:20:24.743 "peer_address": { 00:20:24.743 "trtype": "TCP", 00:20:24.743 "adrfam": "IPv4", 00:20:24.743 "traddr": "10.0.0.1", 00:20:24.743 "trsvcid": "45922" 00:20:24.743 }, 00:20:24.743 "auth": { 00:20:24.743 "state": "completed", 00:20:24.743 "digest": "sha256", 00:20:24.743 "dhgroup": "ffdhe4096" 00:20:24.743 } 00:20:24.743 } 00:20:24.743 ]' 00:20:24.743 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.743 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:24.743 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.743 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:24.743 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.743 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.743 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.743 06:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.002 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:20:25.002 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:20:25.568 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.568 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:25.568 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.568 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.568 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.568 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.568 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:25.568 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:25.827 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:25.827 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.827 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:20:25.827 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:25.827 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:25.827 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.827 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.827 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.827 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.827 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.827 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.827 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.827 06:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.086 00:20:26.086 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:26.086 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.086 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.345 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.345 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.345 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.345 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.345 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.345 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.345 { 00:20:26.345 "cntlid": 29, 00:20:26.345 "qid": 0, 00:20:26.345 "state": "enabled", 00:20:26.345 "thread": "nvmf_tgt_poll_group_000", 00:20:26.345 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:26.345 "listen_address": { 00:20:26.345 "trtype": "TCP", 00:20:26.345 "adrfam": "IPv4", 00:20:26.345 "traddr": "10.0.0.2", 00:20:26.345 "trsvcid": "4420" 00:20:26.345 }, 00:20:26.345 "peer_address": { 00:20:26.345 "trtype": "TCP", 00:20:26.345 "adrfam": "IPv4", 00:20:26.345 "traddr": "10.0.0.1", 00:20:26.345 "trsvcid": "45948" 00:20:26.345 }, 00:20:26.345 "auth": { 00:20:26.345 "state": "completed", 00:20:26.345 "digest": "sha256", 00:20:26.345 "dhgroup": "ffdhe4096" 00:20:26.345 } 00:20:26.345 } 00:20:26.345 ]' 00:20:26.345 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.345 06:10:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:26.345 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.345 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:26.345 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.345 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.345 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.345 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.604 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:20:26.604 06:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:20:27.172 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.172 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:27.172 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.172 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.172 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.172 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.172 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:27.172 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:27.431 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:27.431 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.431 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:27.431 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:27.431 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:27.431 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.431 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:27.431 06:10:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.431 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.431 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.431 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:27.431 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:27.431 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:27.690 00:20:27.690 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.690 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.690 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.949 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.949 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.949 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.949 06:10:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.949 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.949 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.949 { 00:20:27.949 "cntlid": 31, 00:20:27.949 "qid": 0, 00:20:27.949 "state": "enabled", 00:20:27.949 "thread": "nvmf_tgt_poll_group_000", 00:20:27.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:27.949 "listen_address": { 00:20:27.949 "trtype": "TCP", 00:20:27.949 "adrfam": "IPv4", 00:20:27.949 "traddr": "10.0.0.2", 00:20:27.949 "trsvcid": "4420" 00:20:27.949 }, 00:20:27.949 "peer_address": { 00:20:27.949 "trtype": "TCP", 00:20:27.949 "adrfam": "IPv4", 00:20:27.949 "traddr": "10.0.0.1", 00:20:27.949 "trsvcid": "34252" 00:20:27.949 }, 00:20:27.949 "auth": { 00:20:27.949 "state": "completed", 00:20:27.949 "digest": "sha256", 00:20:27.949 "dhgroup": "ffdhe4096" 00:20:27.949 } 00:20:27.949 } 00:20:27.949 ]' 00:20:27.949 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.949 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:27.949 06:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.949 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:27.949 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.949 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.949 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.949 06:10:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.208 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:20:28.208 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:20:28.776 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.776 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:28.776 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.776 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.776 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.776 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:28.776 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.776 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:20:28.776 06:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:29.035 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:29.035 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.035 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:29.035 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:29.035 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:29.035 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.035 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.035 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.035 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.035 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.035 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.035 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.035 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.294 00:20:29.294 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.294 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.294 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.553 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.553 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.553 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.553 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.553 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.553 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.553 { 00:20:29.553 "cntlid": 33, 00:20:29.553 "qid": 0, 00:20:29.553 "state": "enabled", 00:20:29.553 "thread": "nvmf_tgt_poll_group_000", 00:20:29.553 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:29.553 "listen_address": { 
00:20:29.553 "trtype": "TCP", 00:20:29.553 "adrfam": "IPv4", 00:20:29.553 "traddr": "10.0.0.2", 00:20:29.553 "trsvcid": "4420" 00:20:29.553 }, 00:20:29.553 "peer_address": { 00:20:29.553 "trtype": "TCP", 00:20:29.553 "adrfam": "IPv4", 00:20:29.553 "traddr": "10.0.0.1", 00:20:29.553 "trsvcid": "34276" 00:20:29.553 }, 00:20:29.553 "auth": { 00:20:29.553 "state": "completed", 00:20:29.553 "digest": "sha256", 00:20:29.553 "dhgroup": "ffdhe6144" 00:20:29.553 } 00:20:29.553 } 00:20:29.553 ]' 00:20:29.553 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.553 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:29.553 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.553 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:29.553 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.812 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.812 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.812 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.812 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:20:29.812 06:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:20:30.379 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.379 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:30.379 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.379 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.379 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.379 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.638 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:30.638 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:30.638 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:30.638 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:20:30.638 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:30.638 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:30.638 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:30.638 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.638 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.638 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.638 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.638 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.638 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.638 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.638 06:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.206 00:20:31.206 06:10:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.206 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.206 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.206 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.206 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.206 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.206 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.206 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.206 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.206 { 00:20:31.206 "cntlid": 35, 00:20:31.206 "qid": 0, 00:20:31.206 "state": "enabled", 00:20:31.206 "thread": "nvmf_tgt_poll_group_000", 00:20:31.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:31.206 "listen_address": { 00:20:31.206 "trtype": "TCP", 00:20:31.206 "adrfam": "IPv4", 00:20:31.206 "traddr": "10.0.0.2", 00:20:31.206 "trsvcid": "4420" 00:20:31.206 }, 00:20:31.206 "peer_address": { 00:20:31.206 "trtype": "TCP", 00:20:31.206 "adrfam": "IPv4", 00:20:31.206 "traddr": "10.0.0.1", 00:20:31.206 "trsvcid": "34314" 00:20:31.206 }, 00:20:31.206 "auth": { 00:20:31.206 "state": "completed", 00:20:31.206 "digest": "sha256", 00:20:31.206 "dhgroup": "ffdhe6144" 00:20:31.206 } 00:20:31.206 } 00:20:31.206 ]' 00:20:31.206 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:20:31.206 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:31.206 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.465 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:31.465 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.465 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.465 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.465 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.465 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:20:31.465 06:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:20:32.032 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.032 06:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:32.032 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.032 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.291 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.291 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.291 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:32.291 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:32.291 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:32.291 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.291 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:32.291 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:32.291 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:32.291 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.291 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.291 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.291 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.291 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.291 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.291 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.291 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.859 00:20:32.859 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.859 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.859 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.859 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.859 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.859 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.859 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.859 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.859 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.859 { 00:20:32.859 "cntlid": 37, 00:20:32.859 "qid": 0, 00:20:32.859 "state": "enabled", 00:20:32.859 "thread": "nvmf_tgt_poll_group_000", 00:20:32.859 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:32.859 "listen_address": { 00:20:32.859 "trtype": "TCP", 00:20:32.859 "adrfam": "IPv4", 00:20:32.859 "traddr": "10.0.0.2", 00:20:32.859 "trsvcid": "4420" 00:20:32.859 }, 00:20:32.859 "peer_address": { 00:20:32.859 "trtype": "TCP", 00:20:32.859 "adrfam": "IPv4", 00:20:32.859 "traddr": "10.0.0.1", 00:20:32.859 "trsvcid": "34350" 00:20:32.859 }, 00:20:32.859 "auth": { 00:20:32.859 "state": "completed", 00:20:32.859 "digest": "sha256", 00:20:32.859 "dhgroup": "ffdhe6144" 00:20:32.859 } 00:20:32.859 } 00:20:32.859 ]' 00:20:32.860 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.860 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:32.860 06:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.119 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:33.119 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.119 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:33.119 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.119 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.378 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:20:33.378 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:20:33.946 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.946 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:33.946 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.946 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.946 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.946 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:33.946 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:33.947 06:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:33.947 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:33.947 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.947 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:33.947 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:33.947 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:33.947 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.947 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:33.947 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.947 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.947 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.947 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:33.947 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:33.947 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:34.515 00:20:34.515 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.515 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.515 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.515 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.515 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.515 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.515 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.515 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.515 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.515 { 00:20:34.515 "cntlid": 39, 00:20:34.515 "qid": 0, 00:20:34.515 "state": "enabled", 00:20:34.515 "thread": "nvmf_tgt_poll_group_000", 00:20:34.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:34.515 "listen_address": { 00:20:34.515 "trtype": 
"TCP", 00:20:34.515 "adrfam": "IPv4", 00:20:34.515 "traddr": "10.0.0.2", 00:20:34.515 "trsvcid": "4420" 00:20:34.515 }, 00:20:34.515 "peer_address": { 00:20:34.515 "trtype": "TCP", 00:20:34.515 "adrfam": "IPv4", 00:20:34.515 "traddr": "10.0.0.1", 00:20:34.515 "trsvcid": "34368" 00:20:34.515 }, 00:20:34.515 "auth": { 00:20:34.515 "state": "completed", 00:20:34.515 "digest": "sha256", 00:20:34.515 "dhgroup": "ffdhe6144" 00:20:34.515 } 00:20:34.515 } 00:20:34.515 ]' 00:20:34.515 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.515 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:34.515 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.774 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:34.774 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.774 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.774 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.774 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.041 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:20:35.041 06:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:20:35.610 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.610 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:35.610 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.610 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.610 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.610 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:35.610 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.610 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:35.610 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:35.610 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:35.610 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.610 06:10:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:35.610 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:35.610 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:35.610 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.610 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.610 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.610 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.870 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.870 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.870 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.870 06:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.130 00:20:36.130 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.130 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.131 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.389 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.389 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.389 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.389 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.389 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.389 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.389 { 00:20:36.389 "cntlid": 41, 00:20:36.389 "qid": 0, 00:20:36.389 "state": "enabled", 00:20:36.389 "thread": "nvmf_tgt_poll_group_000", 00:20:36.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:36.389 "listen_address": { 00:20:36.389 "trtype": "TCP", 00:20:36.389 "adrfam": "IPv4", 00:20:36.389 "traddr": "10.0.0.2", 00:20:36.389 "trsvcid": "4420" 00:20:36.389 }, 00:20:36.389 "peer_address": { 00:20:36.389 "trtype": "TCP", 00:20:36.389 "adrfam": "IPv4", 00:20:36.389 "traddr": "10.0.0.1", 00:20:36.389 "trsvcid": "34394" 00:20:36.389 }, 00:20:36.389 "auth": { 00:20:36.389 "state": "completed", 00:20:36.389 "digest": "sha256", 00:20:36.389 "dhgroup": "ffdhe8192" 00:20:36.389 } 00:20:36.389 } 00:20:36.389 ]' 00:20:36.389 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.389 06:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:36.389 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.389 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:36.389 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.648 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.648 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.648 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.648 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:20:36.648 06:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:20:37.215 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:20:37.215 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:37.215 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.215 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.215 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.215 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.215 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:37.215 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:37.474 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:37.474 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.474 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:37.474 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:37.474 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:37.474 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.474 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.474 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.474 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.474 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.474 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.474 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.474 06:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.042 00:20:38.042 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.042 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.042 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.301 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.301 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.301 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.301 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.301 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.301 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.301 { 00:20:38.301 "cntlid": 43, 00:20:38.301 "qid": 0, 00:20:38.301 "state": "enabled", 00:20:38.301 "thread": "nvmf_tgt_poll_group_000", 00:20:38.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:38.301 "listen_address": { 00:20:38.301 "trtype": "TCP", 00:20:38.301 "adrfam": "IPv4", 00:20:38.301 "traddr": "10.0.0.2", 00:20:38.301 "trsvcid": "4420" 00:20:38.301 }, 00:20:38.301 "peer_address": { 00:20:38.301 "trtype": "TCP", 00:20:38.301 "adrfam": "IPv4", 00:20:38.301 "traddr": "10.0.0.1", 00:20:38.301 "trsvcid": "34608" 00:20:38.301 }, 00:20:38.301 "auth": { 00:20:38.301 "state": "completed", 00:20:38.301 "digest": "sha256", 00:20:38.301 "dhgroup": "ffdhe8192" 00:20:38.301 } 00:20:38.301 } 00:20:38.301 ]' 00:20:38.301 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:38.301 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:38.301 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.301 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:38.301 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.301 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:38.301 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.301 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.559 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:20:38.559 06:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:20:39.125 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.125 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:39.126 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.126 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.126 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.126 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:39.126 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:39.126 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:39.384 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:39.384 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.384 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:39.384 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:39.384 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:39.384 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.384 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.384 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.384 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.384 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.384 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.384 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.384 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.951 00:20:39.951 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.951 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.951 06:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.951 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.951 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.951 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.951 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.951 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.951 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.951 { 00:20:39.951 "cntlid": 45, 00:20:39.951 "qid": 0, 00:20:39.951 "state": "enabled", 00:20:39.951 "thread": "nvmf_tgt_poll_group_000", 00:20:39.951 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:39.951 "listen_address": { 00:20:39.951 "trtype": "TCP", 00:20:39.951 "adrfam": "IPv4", 00:20:39.951 "traddr": "10.0.0.2", 00:20:39.951 "trsvcid": "4420" 00:20:39.951 }, 00:20:39.951 "peer_address": { 00:20:39.951 "trtype": "TCP", 00:20:39.951 "adrfam": "IPv4", 00:20:39.951 "traddr": "10.0.0.1", 00:20:39.951 "trsvcid": "34632" 00:20:39.951 }, 00:20:39.951 "auth": { 00:20:39.951 "state": "completed", 00:20:39.951 "digest": "sha256", 00:20:39.951 "dhgroup": "ffdhe8192" 00:20:39.951 } 00:20:39.951 } 00:20:39.951 ]' 00:20:39.951 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.210 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:40.210 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.210 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:40.210 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.210 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.210 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.210 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.469 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:20:40.469 06:11:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:20:41.036 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.036 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:41.036 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.036 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.036 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.036 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.036 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:41.036 06:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:41.036 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:41.036 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:20:41.036 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:41.036 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:41.036 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:41.036 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.036 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:41.036 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.036 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.036 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.036 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:41.036 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:41.036 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:41.603 00:20:41.603 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:20:41.603 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.603 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.862 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.862 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.862 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.862 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.862 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.862 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.862 { 00:20:41.862 "cntlid": 47, 00:20:41.862 "qid": 0, 00:20:41.862 "state": "enabled", 00:20:41.862 "thread": "nvmf_tgt_poll_group_000", 00:20:41.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:41.862 "listen_address": { 00:20:41.862 "trtype": "TCP", 00:20:41.862 "adrfam": "IPv4", 00:20:41.862 "traddr": "10.0.0.2", 00:20:41.862 "trsvcid": "4420" 00:20:41.862 }, 00:20:41.862 "peer_address": { 00:20:41.862 "trtype": "TCP", 00:20:41.862 "adrfam": "IPv4", 00:20:41.862 "traddr": "10.0.0.1", 00:20:41.862 "trsvcid": "34660" 00:20:41.862 }, 00:20:41.862 "auth": { 00:20:41.862 "state": "completed", 00:20:41.862 "digest": "sha256", 00:20:41.862 "dhgroup": "ffdhe8192" 00:20:41.862 } 00:20:41.862 } 00:20:41.862 ]' 00:20:41.862 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.862 06:11:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:41.862 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.862 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:41.862 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.862 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.862 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.862 06:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.121 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:20:42.121 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:20:42.688 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.688 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:42.688 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.688 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.688 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.688 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:42.688 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:42.688 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.688 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:42.688 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:42.947 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:42.947 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.947 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:42.947 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:42.947 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:42.947 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.947 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.947 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.947 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.947 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.947 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.947 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.947 06:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.206 00:20:43.206 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.206 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.206 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.464 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.464 06:11:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.464 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.464 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.464 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.465 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.465 { 00:20:43.465 "cntlid": 49, 00:20:43.465 "qid": 0, 00:20:43.465 "state": "enabled", 00:20:43.465 "thread": "nvmf_tgt_poll_group_000", 00:20:43.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:43.465 "listen_address": { 00:20:43.465 "trtype": "TCP", 00:20:43.465 "adrfam": "IPv4", 00:20:43.465 "traddr": "10.0.0.2", 00:20:43.465 "trsvcid": "4420" 00:20:43.465 }, 00:20:43.465 "peer_address": { 00:20:43.465 "trtype": "TCP", 00:20:43.465 "adrfam": "IPv4", 00:20:43.465 "traddr": "10.0.0.1", 00:20:43.465 "trsvcid": "34694" 00:20:43.465 }, 00:20:43.465 "auth": { 00:20:43.465 "state": "completed", 00:20:43.465 "digest": "sha384", 00:20:43.465 "dhgroup": "null" 00:20:43.465 } 00:20:43.465 } 00:20:43.465 ]' 00:20:43.465 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.465 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.465 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.465 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:43.465 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.465 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.465 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.465 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.723 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:20:43.723 06:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:20:44.291 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.291 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:44.291 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.291 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.291 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.291 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.291 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:44.291 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:44.550 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:44.550 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.550 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:44.550 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:44.550 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:44.550 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.550 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.550 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.550 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.550 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.550 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.550 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.550 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.550 00:20:44.808 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.808 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.808 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.808 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.808 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.808 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.809 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.809 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.809 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.809 { 00:20:44.809 "cntlid": 51, 
00:20:44.809 "qid": 0, 00:20:44.809 "state": "enabled", 00:20:44.809 "thread": "nvmf_tgt_poll_group_000", 00:20:44.809 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:44.809 "listen_address": { 00:20:44.809 "trtype": "TCP", 00:20:44.809 "adrfam": "IPv4", 00:20:44.809 "traddr": "10.0.0.2", 00:20:44.809 "trsvcid": "4420" 00:20:44.809 }, 00:20:44.809 "peer_address": { 00:20:44.809 "trtype": "TCP", 00:20:44.809 "adrfam": "IPv4", 00:20:44.809 "traddr": "10.0.0.1", 00:20:44.809 "trsvcid": "34720" 00:20:44.809 }, 00:20:44.809 "auth": { 00:20:44.809 "state": "completed", 00:20:44.809 "digest": "sha384", 00:20:44.809 "dhgroup": "null" 00:20:44.809 } 00:20:44.809 } 00:20:44.809 ]' 00:20:44.809 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.809 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.809 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.067 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:45.067 06:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.067 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.067 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.067 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.326 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret 
DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:20:45.326 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:20:45.894 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.894 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:45.894 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.894 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.894 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.894 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.894 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:45.894 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:45.894 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 
00:20:45.894 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.894 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:45.894 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:45.894 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:45.894 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.894 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.894 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.894 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.894 06:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.894 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.894 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.894 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.153 00:20:46.153 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.153 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.153 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.411 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.412 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.412 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.412 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.412 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.412 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.412 { 00:20:46.412 "cntlid": 53, 00:20:46.412 "qid": 0, 00:20:46.412 "state": "enabled", 00:20:46.412 "thread": "nvmf_tgt_poll_group_000", 00:20:46.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:46.412 "listen_address": { 00:20:46.412 "trtype": "TCP", 00:20:46.412 "adrfam": "IPv4", 00:20:46.412 "traddr": "10.0.0.2", 00:20:46.412 "trsvcid": "4420" 00:20:46.412 }, 00:20:46.412 "peer_address": { 00:20:46.412 "trtype": "TCP", 00:20:46.412 "adrfam": "IPv4", 00:20:46.412 "traddr": "10.0.0.1", 00:20:46.412 "trsvcid": "34750" 00:20:46.412 }, 00:20:46.412 "auth": { 00:20:46.412 "state": "completed", 00:20:46.412 "digest": "sha384", 00:20:46.412 "dhgroup": "null" 00:20:46.412 } 00:20:46.412 } 
00:20:46.412 ]' 00:20:46.412 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.412 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:46.412 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.412 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:46.412 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.670 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.670 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.670 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.670 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:20:46.670 06:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:20:47.238 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.238 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.238 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:47.238 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.238 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.238 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.238 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.238 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:47.238 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:47.496 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:47.496 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.496 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:47.496 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:47.496 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:47.496 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.496 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:47.496 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.496 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.496 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.496 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:47.496 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:47.496 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:47.755 00:20:47.755 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.755 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.755 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.014 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.014 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:48.014 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.014 06:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.014 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.014 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.014 { 00:20:48.014 "cntlid": 55, 00:20:48.014 "qid": 0, 00:20:48.014 "state": "enabled", 00:20:48.014 "thread": "nvmf_tgt_poll_group_000", 00:20:48.014 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:48.014 "listen_address": { 00:20:48.014 "trtype": "TCP", 00:20:48.014 "adrfam": "IPv4", 00:20:48.014 "traddr": "10.0.0.2", 00:20:48.014 "trsvcid": "4420" 00:20:48.014 }, 00:20:48.014 "peer_address": { 00:20:48.014 "trtype": "TCP", 00:20:48.014 "adrfam": "IPv4", 00:20:48.014 "traddr": "10.0.0.1", 00:20:48.014 "trsvcid": "43502" 00:20:48.014 }, 00:20:48.014 "auth": { 00:20:48.014 "state": "completed", 00:20:48.014 "digest": "sha384", 00:20:48.014 "dhgroup": "null" 00:20:48.014 } 00:20:48.014 } 00:20:48.014 ]' 00:20:48.014 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.014 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.014 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.014 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:48.014 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.014 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.014 06:11:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.014 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.273 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:20:48.273 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:20:48.840 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.840 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:48.840 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.840 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.840 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.840 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:48.840 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.840 06:11:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:48.840 06:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:49.099 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:49.099 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.099 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:49.099 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:49.099 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:49.099 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.099 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.099 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.099 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.099 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.099 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.099 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.099 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.358 00:20:49.358 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.358 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.358 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.617 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.617 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.617 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.617 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.617 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.617 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.617 { 00:20:49.617 "cntlid": 57, 00:20:49.617 "qid": 0, 00:20:49.617 "state": "enabled", 00:20:49.617 "thread": "nvmf_tgt_poll_group_000", 00:20:49.617 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:49.617 "listen_address": { 00:20:49.617 "trtype": "TCP", 00:20:49.617 "adrfam": "IPv4", 00:20:49.617 "traddr": "10.0.0.2", 00:20:49.617 "trsvcid": "4420" 00:20:49.617 }, 00:20:49.617 "peer_address": { 00:20:49.617 "trtype": "TCP", 00:20:49.617 "adrfam": "IPv4", 00:20:49.617 "traddr": "10.0.0.1", 00:20:49.617 "trsvcid": "43524" 00:20:49.617 }, 00:20:49.617 "auth": { 00:20:49.617 "state": "completed", 00:20:49.617 "digest": "sha384", 00:20:49.617 "dhgroup": "ffdhe2048" 00:20:49.617 } 00:20:49.617 } 00:20:49.617 ]' 00:20:49.617 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.617 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.617 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.617 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:49.617 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.617 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.617 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.617 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.876 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret 
DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:20:49.876 06:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:20:50.444 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.444 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:50.444 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.444 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.444 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.444 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.444 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:50.444 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:50.703 06:11:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:50.703 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.703 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:50.703 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:50.703 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:50.703 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.703 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.703 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.703 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.703 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.703 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.703 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.703 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.961 00:20:50.961 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.961 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.961 06:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.220 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.220 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.220 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.220 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.220 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.220 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.220 { 00:20:51.220 "cntlid": 59, 00:20:51.220 "qid": 0, 00:20:51.220 "state": "enabled", 00:20:51.220 "thread": "nvmf_tgt_poll_group_000", 00:20:51.220 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:51.220 "listen_address": { 00:20:51.220 "trtype": "TCP", 00:20:51.220 "adrfam": "IPv4", 00:20:51.220 "traddr": "10.0.0.2", 00:20:51.220 "trsvcid": "4420" 00:20:51.220 }, 00:20:51.220 "peer_address": { 00:20:51.220 "trtype": "TCP", 00:20:51.220 "adrfam": "IPv4", 00:20:51.220 "traddr": "10.0.0.1", 00:20:51.220 "trsvcid": "43560" 00:20:51.220 }, 00:20:51.220 "auth": { 00:20:51.220 "state": 
"completed", 00:20:51.220 "digest": "sha384", 00:20:51.220 "dhgroup": "ffdhe2048" 00:20:51.220 } 00:20:51.220 } 00:20:51.220 ]' 00:20:51.220 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.220 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.220 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.220 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:51.220 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.220 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.220 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.221 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.479 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:20:51.479 06:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:20:52.047 06:11:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.047 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:52.047 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.047 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.047 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.047 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.047 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:52.047 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:52.306 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:52.306 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.306 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:52.306 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:52.306 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:52.306 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.306 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.306 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.306 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.306 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.306 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.306 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.306 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.565 00:20:52.565 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.565 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.565 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.823 
06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.823 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.823 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.823 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.823 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.823 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.823 { 00:20:52.823 "cntlid": 61, 00:20:52.823 "qid": 0, 00:20:52.823 "state": "enabled", 00:20:52.823 "thread": "nvmf_tgt_poll_group_000", 00:20:52.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:52.823 "listen_address": { 00:20:52.823 "trtype": "TCP", 00:20:52.823 "adrfam": "IPv4", 00:20:52.823 "traddr": "10.0.0.2", 00:20:52.823 "trsvcid": "4420" 00:20:52.823 }, 00:20:52.823 "peer_address": { 00:20:52.823 "trtype": "TCP", 00:20:52.823 "adrfam": "IPv4", 00:20:52.823 "traddr": "10.0.0.1", 00:20:52.823 "trsvcid": "43584" 00:20:52.823 }, 00:20:52.823 "auth": { 00:20:52.823 "state": "completed", 00:20:52.823 "digest": "sha384", 00:20:52.823 "dhgroup": "ffdhe2048" 00:20:52.823 } 00:20:52.823 } 00:20:52.823 ]' 00:20:52.823 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.823 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.823 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.824 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:52.824 06:11:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.824 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.824 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.824 06:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.082 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:20:53.082 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:20:53.649 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.649 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:53.649 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.649 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.649 
06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.650 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.650 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:53.650 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:53.908 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:53.908 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.908 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:53.908 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:53.908 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:53.908 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.908 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:53.909 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.909 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.909 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.909 06:11:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:53.909 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:53.909 06:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:54.167 00:20:54.167 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.167 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.167 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.426 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.426 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.426 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.426 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.426 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.426 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.426 { 00:20:54.426 "cntlid": 63, 00:20:54.426 
"qid": 0, 00:20:54.426 "state": "enabled", 00:20:54.426 "thread": "nvmf_tgt_poll_group_000", 00:20:54.426 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:54.426 "listen_address": { 00:20:54.426 "trtype": "TCP", 00:20:54.426 "adrfam": "IPv4", 00:20:54.426 "traddr": "10.0.0.2", 00:20:54.426 "trsvcid": "4420" 00:20:54.426 }, 00:20:54.426 "peer_address": { 00:20:54.426 "trtype": "TCP", 00:20:54.426 "adrfam": "IPv4", 00:20:54.426 "traddr": "10.0.0.1", 00:20:54.426 "trsvcid": "43616" 00:20:54.426 }, 00:20:54.426 "auth": { 00:20:54.426 "state": "completed", 00:20:54.426 "digest": "sha384", 00:20:54.426 "dhgroup": "ffdhe2048" 00:20:54.426 } 00:20:54.426 } 00:20:54.426 ]' 00:20:54.426 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.426 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.426 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.426 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:54.426 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.426 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.426 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.426 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.685 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:20:54.685 06:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:20:55.252 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.252 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:55.252 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.252 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.252 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.252 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:55.252 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.252 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:55.252 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:55.510 06:11:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:55.510 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.510 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:55.510 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:55.510 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:55.510 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.510 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.510 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.510 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.510 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.510 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.510 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.510 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.769 00:20:55.769 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.769 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.769 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.769 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.769 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.770 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.770 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.028 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.028 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.028 { 00:20:56.028 "cntlid": 65, 00:20:56.028 "qid": 0, 00:20:56.028 "state": "enabled", 00:20:56.028 "thread": "nvmf_tgt_poll_group_000", 00:20:56.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:56.028 "listen_address": { 00:20:56.028 "trtype": "TCP", 00:20:56.028 "adrfam": "IPv4", 00:20:56.028 "traddr": "10.0.0.2", 00:20:56.028 "trsvcid": "4420" 00:20:56.028 }, 00:20:56.028 "peer_address": { 00:20:56.028 "trtype": "TCP", 00:20:56.028 "adrfam": "IPv4", 00:20:56.028 "traddr": "10.0.0.1", 00:20:56.028 "trsvcid": "43640" 00:20:56.028 }, 00:20:56.028 "auth": { 00:20:56.028 "state": 
"completed", 00:20:56.028 "digest": "sha384", 00:20:56.028 "dhgroup": "ffdhe3072" 00:20:56.028 } 00:20:56.028 } 00:20:56.028 ]' 00:20:56.028 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.028 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.028 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.028 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:56.028 06:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.028 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.028 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.028 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.288 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:20:56.288 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret 
DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:20:56.868 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.868 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:56.868 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.868 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.868 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.868 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.868 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:56.868 06:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:57.147 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:57.147 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.147 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:57.147 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:57.147 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:20:57.147 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.147 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.147 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.147 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.147 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.147 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.147 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.147 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.147 00:20:57.461 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.461 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.461 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.461 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.461 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.461 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.461 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.461 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.461 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.461 { 00:20:57.461 "cntlid": 67, 00:20:57.461 "qid": 0, 00:20:57.461 "state": "enabled", 00:20:57.462 "thread": "nvmf_tgt_poll_group_000", 00:20:57.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:57.462 "listen_address": { 00:20:57.462 "trtype": "TCP", 00:20:57.462 "adrfam": "IPv4", 00:20:57.462 "traddr": "10.0.0.2", 00:20:57.462 "trsvcid": "4420" 00:20:57.462 }, 00:20:57.462 "peer_address": { 00:20:57.462 "trtype": "TCP", 00:20:57.462 "adrfam": "IPv4", 00:20:57.462 "traddr": "10.0.0.1", 00:20:57.462 "trsvcid": "34752" 00:20:57.462 }, 00:20:57.462 "auth": { 00:20:57.462 "state": "completed", 00:20:57.462 "digest": "sha384", 00:20:57.462 "dhgroup": "ffdhe3072" 00:20:57.462 } 00:20:57.462 } 00:20:57.462 ]' 00:20:57.462 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.462 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.462 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.462 06:11:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:57.462 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.720 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.720 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.720 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.720 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:20:57.720 06:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:20:58.287 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.287 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:58.287 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:58.287 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.287 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.287 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.287 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:58.287 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:58.546 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:58.546 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.546 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:58.546 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:58.546 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:58.546 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.546 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.546 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.546 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:58.546 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.546 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.546 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.546 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.805 00:20:58.805 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.806 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.806 06:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.064 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.065 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.065 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.065 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.065 06:11:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.065 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.065 { 00:20:59.065 "cntlid": 69, 00:20:59.065 "qid": 0, 00:20:59.065 "state": "enabled", 00:20:59.065 "thread": "nvmf_tgt_poll_group_000", 00:20:59.065 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:59.065 "listen_address": { 00:20:59.065 "trtype": "TCP", 00:20:59.065 "adrfam": "IPv4", 00:20:59.065 "traddr": "10.0.0.2", 00:20:59.065 "trsvcid": "4420" 00:20:59.065 }, 00:20:59.065 "peer_address": { 00:20:59.065 "trtype": "TCP", 00:20:59.065 "adrfam": "IPv4", 00:20:59.065 "traddr": "10.0.0.1", 00:20:59.065 "trsvcid": "34768" 00:20:59.065 }, 00:20:59.065 "auth": { 00:20:59.065 "state": "completed", 00:20:59.065 "digest": "sha384", 00:20:59.065 "dhgroup": "ffdhe3072" 00:20:59.065 } 00:20:59.065 } 00:20:59.065 ]' 00:20:59.065 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.065 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.065 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.065 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:59.065 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.324 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.324 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.324 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.324 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:20:59.324 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:20:59.893 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.893 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:59.894 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.894 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.894 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.894 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.894 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:59.894 06:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:00.153 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:00.153 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.153 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:00.153 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:00.153 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:00.153 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.153 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:00.153 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.153 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.153 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.153 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:00.153 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:00.153 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:00.412 00:21:00.412 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.412 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.412 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.671 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.671 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.671 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.671 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.671 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.671 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.671 { 00:21:00.671 "cntlid": 71, 00:21:00.671 "qid": 0, 00:21:00.671 "state": "enabled", 00:21:00.671 "thread": "nvmf_tgt_poll_group_000", 00:21:00.671 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:00.671 "listen_address": { 00:21:00.671 "trtype": "TCP", 00:21:00.671 "adrfam": "IPv4", 00:21:00.671 "traddr": "10.0.0.2", 00:21:00.671 "trsvcid": "4420" 00:21:00.671 }, 00:21:00.671 "peer_address": { 00:21:00.671 "trtype": "TCP", 00:21:00.671 "adrfam": "IPv4", 00:21:00.671 "traddr": "10.0.0.1", 
00:21:00.671 "trsvcid": "34790" 00:21:00.671 }, 00:21:00.671 "auth": { 00:21:00.671 "state": "completed", 00:21:00.671 "digest": "sha384", 00:21:00.671 "dhgroup": "ffdhe3072" 00:21:00.671 } 00:21:00.671 } 00:21:00.671 ]' 00:21:00.671 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.671 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.671 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.671 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:00.671 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.930 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.930 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.930 06:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.930 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:21:00.930 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:21:01.498 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.498 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:01.498 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.498 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.498 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.498 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.498 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.498 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:01.498 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:01.757 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:01.757 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.757 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:01.757 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:01.757 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:01.757 06:11:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.757 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.757 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.757 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.757 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.757 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.757 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.757 06:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.015 00:21:02.015 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.015 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.015 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.274 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.274 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.274 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.274 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.274 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.274 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.274 { 00:21:02.274 "cntlid": 73, 00:21:02.274 "qid": 0, 00:21:02.274 "state": "enabled", 00:21:02.274 "thread": "nvmf_tgt_poll_group_000", 00:21:02.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:02.274 "listen_address": { 00:21:02.274 "trtype": "TCP", 00:21:02.274 "adrfam": "IPv4", 00:21:02.274 "traddr": "10.0.0.2", 00:21:02.274 "trsvcid": "4420" 00:21:02.274 }, 00:21:02.274 "peer_address": { 00:21:02.274 "trtype": "TCP", 00:21:02.274 "adrfam": "IPv4", 00:21:02.274 "traddr": "10.0.0.1", 00:21:02.274 "trsvcid": "34802" 00:21:02.274 }, 00:21:02.274 "auth": { 00:21:02.274 "state": "completed", 00:21:02.274 "digest": "sha384", 00:21:02.274 "dhgroup": "ffdhe4096" 00:21:02.274 } 00:21:02.274 } 00:21:02.274 ]' 00:21:02.274 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.274 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.274 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.274 06:11:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:02.274 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.274 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.274 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.274 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.533 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:21:02.534 06:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:21:03.102 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.102 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:03.102 06:11:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.102 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.102 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.102 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.102 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:03.102 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:03.360 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:03.361 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.361 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:03.361 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:03.361 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:03.361 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.361 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.361 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.361 06:11:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.361 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.361 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.361 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.361 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.619 00:21:03.619 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.619 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.619 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.877 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.877 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.877 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.877 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:03.877 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.877 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.877 { 00:21:03.877 "cntlid": 75, 00:21:03.877 "qid": 0, 00:21:03.877 "state": "enabled", 00:21:03.877 "thread": "nvmf_tgt_poll_group_000", 00:21:03.877 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:03.877 "listen_address": { 00:21:03.877 "trtype": "TCP", 00:21:03.877 "adrfam": "IPv4", 00:21:03.877 "traddr": "10.0.0.2", 00:21:03.877 "trsvcid": "4420" 00:21:03.877 }, 00:21:03.877 "peer_address": { 00:21:03.877 "trtype": "TCP", 00:21:03.877 "adrfam": "IPv4", 00:21:03.877 "traddr": "10.0.0.1", 00:21:03.877 "trsvcid": "34832" 00:21:03.877 }, 00:21:03.877 "auth": { 00:21:03.877 "state": "completed", 00:21:03.877 "digest": "sha384", 00:21:03.877 "dhgroup": "ffdhe4096" 00:21:03.877 } 00:21:03.877 } 00:21:03.877 ]' 00:21:03.877 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.877 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.877 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.877 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:03.877 06:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.877 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.877 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.877 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.136 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:21:04.136 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:21:04.702 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.702 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:04.702 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.702 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.702 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.702 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.702 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:04.702 06:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:04.961 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:04.961 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.961 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:04.961 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:04.961 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:04.961 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.961 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.961 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.961 06:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.961 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.961 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.961 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.961 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.219 00:21:05.219 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.219 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.219 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.478 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.478 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.478 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.478 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.478 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.478 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.478 { 00:21:05.478 "cntlid": 77, 00:21:05.478 "qid": 0, 00:21:05.478 "state": "enabled", 00:21:05.478 "thread": "nvmf_tgt_poll_group_000", 00:21:05.478 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:05.478 "listen_address": { 00:21:05.478 "trtype": "TCP", 00:21:05.478 "adrfam": "IPv4", 00:21:05.478 "traddr": "10.0.0.2", 00:21:05.478 
"trsvcid": "4420" 00:21:05.478 }, 00:21:05.478 "peer_address": { 00:21:05.478 "trtype": "TCP", 00:21:05.478 "adrfam": "IPv4", 00:21:05.478 "traddr": "10.0.0.1", 00:21:05.478 "trsvcid": "34856" 00:21:05.478 }, 00:21:05.478 "auth": { 00:21:05.478 "state": "completed", 00:21:05.478 "digest": "sha384", 00:21:05.478 "dhgroup": "ffdhe4096" 00:21:05.478 } 00:21:05.478 } 00:21:05.478 ]' 00:21:05.478 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.478 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.478 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.478 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:05.478 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.737 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.737 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.737 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.737 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:21:05.737 06:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:21:06.304 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.304 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:06.304 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.304 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.304 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.304 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.304 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:06.304 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:06.563 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:21:06.563 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.563 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:06.563 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:06.563 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:06.563 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.563 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:06.563 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.563 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.563 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.563 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:06.563 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:06.563 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:06.822 00:21:06.822 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.822 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:21:06.822 06:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.081 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.081 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.081 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.081 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.082 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.082 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.082 { 00:21:07.082 "cntlid": 79, 00:21:07.082 "qid": 0, 00:21:07.082 "state": "enabled", 00:21:07.082 "thread": "nvmf_tgt_poll_group_000", 00:21:07.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:07.082 "listen_address": { 00:21:07.082 "trtype": "TCP", 00:21:07.082 "adrfam": "IPv4", 00:21:07.082 "traddr": "10.0.0.2", 00:21:07.082 "trsvcid": "4420" 00:21:07.082 }, 00:21:07.082 "peer_address": { 00:21:07.082 "trtype": "TCP", 00:21:07.082 "adrfam": "IPv4", 00:21:07.082 "traddr": "10.0.0.1", 00:21:07.082 "trsvcid": "34878" 00:21:07.082 }, 00:21:07.082 "auth": { 00:21:07.082 "state": "completed", 00:21:07.082 "digest": "sha384", 00:21:07.082 "dhgroup": "ffdhe4096" 00:21:07.082 } 00:21:07.082 } 00:21:07.082 ]' 00:21:07.082 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.082 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:07.082 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.082 06:11:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:07.082 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.082 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.082 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.082 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.340 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:21:07.340 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:21:07.908 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.908 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:07.908 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.908 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:07.908 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.908 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:07.908 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.908 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:07.908 06:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:08.167 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:21:08.167 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.167 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:08.167 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:08.167 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:08.167 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.167 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.167 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.167 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:08.167 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.167 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.167 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.167 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.426 00:21:08.426 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.426 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.426 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.685 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.685 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.685 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.685 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.685 06:11:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.685 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.685 { 00:21:08.685 "cntlid": 81, 00:21:08.685 "qid": 0, 00:21:08.685 "state": "enabled", 00:21:08.685 "thread": "nvmf_tgt_poll_group_000", 00:21:08.685 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:08.685 "listen_address": { 00:21:08.685 "trtype": "TCP", 00:21:08.685 "adrfam": "IPv4", 00:21:08.685 "traddr": "10.0.0.2", 00:21:08.685 "trsvcid": "4420" 00:21:08.685 }, 00:21:08.685 "peer_address": { 00:21:08.685 "trtype": "TCP", 00:21:08.685 "adrfam": "IPv4", 00:21:08.685 "traddr": "10.0.0.1", 00:21:08.685 "trsvcid": "38184" 00:21:08.685 }, 00:21:08.685 "auth": { 00:21:08.685 "state": "completed", 00:21:08.685 "digest": "sha384", 00:21:08.685 "dhgroup": "ffdhe6144" 00:21:08.685 } 00:21:08.685 } 00:21:08.685 ]' 00:21:08.685 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.685 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.685 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.685 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:08.685 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.943 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.943 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.943 06:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.944 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:21:08.944 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:21:09.511 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.511 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:09.511 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.511 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.511 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.511 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.511 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:09.511 06:11:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:09.770 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:09.770 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.770 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:09.770 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:09.770 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:09.770 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.770 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.770 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.770 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.770 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.770 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.770 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.770 06:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.338 00:21:10.338 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.338 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.338 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.338 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.338 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.338 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.338 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.338 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.338 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.338 { 00:21:10.338 "cntlid": 83, 00:21:10.338 "qid": 0, 00:21:10.338 "state": "enabled", 00:21:10.338 "thread": "nvmf_tgt_poll_group_000", 00:21:10.338 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:10.338 "listen_address": { 00:21:10.338 "trtype": "TCP", 00:21:10.338 "adrfam": "IPv4", 00:21:10.338 "traddr": "10.0.0.2", 00:21:10.338 
"trsvcid": "4420" 00:21:10.338 }, 00:21:10.338 "peer_address": { 00:21:10.338 "trtype": "TCP", 00:21:10.338 "adrfam": "IPv4", 00:21:10.338 "traddr": "10.0.0.1", 00:21:10.338 "trsvcid": "38216" 00:21:10.338 }, 00:21:10.338 "auth": { 00:21:10.338 "state": "completed", 00:21:10.338 "digest": "sha384", 00:21:10.338 "dhgroup": "ffdhe6144" 00:21:10.338 } 00:21:10.338 } 00:21:10.338 ]' 00:21:10.338 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.338 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:10.338 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.597 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:10.597 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.597 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.597 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.597 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.856 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:21:10.856 06:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:21:11.423 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.423 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:11.423 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.423 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.423 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.423 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.423 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:11.423 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:11.423 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:11.423 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.423 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:11.423 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:11.423 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:11.423 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.424 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.424 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.424 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.424 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.424 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.424 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.424 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.991 00:21:11.991 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.991 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.991 06:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.991 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.991 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.991 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.991 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.991 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.991 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.991 { 00:21:11.991 "cntlid": 85, 00:21:11.991 "qid": 0, 00:21:11.991 "state": "enabled", 00:21:11.991 "thread": "nvmf_tgt_poll_group_000", 00:21:11.991 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:11.991 "listen_address": { 00:21:11.991 "trtype": "TCP", 00:21:11.991 "adrfam": "IPv4", 00:21:11.991 "traddr": "10.0.0.2", 00:21:11.991 "trsvcid": "4420" 00:21:11.991 }, 00:21:11.991 "peer_address": { 00:21:11.991 "trtype": "TCP", 00:21:11.991 "adrfam": "IPv4", 00:21:11.991 "traddr": "10.0.0.1", 00:21:11.991 "trsvcid": "38234" 00:21:11.991 }, 00:21:11.991 "auth": { 00:21:11.991 "state": "completed", 00:21:11.991 "digest": "sha384", 00:21:11.991 "dhgroup": "ffdhe6144" 00:21:11.991 } 00:21:11.991 } 00:21:11.991 ]' 00:21:11.991 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.991 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.991 06:11:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.250 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:12.250 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.250 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.250 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.250 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.508 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:21:12.508 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:21:13.077 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.077 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:13.077 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.077 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.077 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.077 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.077 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:13.077 06:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:13.077 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:13.077 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.077 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:13.077 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:13.077 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:13.077 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.077 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:13.077 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.077 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.077 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.077 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:13.077 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:13.077 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:13.644 00:21:13.644 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.644 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.644 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.644 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.644 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.644 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.644 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:13.644 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.644 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.644 { 00:21:13.644 "cntlid": 87, 00:21:13.644 "qid": 0, 00:21:13.644 "state": "enabled", 00:21:13.644 "thread": "nvmf_tgt_poll_group_000", 00:21:13.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:13.644 "listen_address": { 00:21:13.644 "trtype": "TCP", 00:21:13.644 "adrfam": "IPv4", 00:21:13.644 "traddr": "10.0.0.2", 00:21:13.644 "trsvcid": "4420" 00:21:13.644 }, 00:21:13.644 "peer_address": { 00:21:13.644 "trtype": "TCP", 00:21:13.644 "adrfam": "IPv4", 00:21:13.644 "traddr": "10.0.0.1", 00:21:13.644 "trsvcid": "38252" 00:21:13.644 }, 00:21:13.644 "auth": { 00:21:13.644 "state": "completed", 00:21:13.644 "digest": "sha384", 00:21:13.644 "dhgroup": "ffdhe6144" 00:21:13.644 } 00:21:13.644 } 00:21:13.644 ]' 00:21:13.644 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.903 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:13.903 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.903 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:13.903 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.903 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.903 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.903 06:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.162 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:21:14.162 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:21:14.730 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.730 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:14.730 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.730 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.730 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.730 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:14.730 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.730 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:14.730 06:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:14.989 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:14.989 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.989 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:14.989 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:14.989 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:14.989 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.989 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.989 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.989 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.989 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.989 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.989 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.989 06:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.247 00:21:15.505 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.505 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.505 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.505 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.505 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.505 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.505 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.505 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.505 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.505 { 00:21:15.505 "cntlid": 89, 00:21:15.505 "qid": 0, 00:21:15.505 "state": "enabled", 00:21:15.505 "thread": "nvmf_tgt_poll_group_000", 00:21:15.505 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:15.505 "listen_address": { 00:21:15.505 "trtype": "TCP", 00:21:15.505 "adrfam": "IPv4", 00:21:15.505 "traddr": "10.0.0.2", 00:21:15.505 
"trsvcid": "4420" 00:21:15.505 }, 00:21:15.505 "peer_address": { 00:21:15.505 "trtype": "TCP", 00:21:15.505 "adrfam": "IPv4", 00:21:15.505 "traddr": "10.0.0.1", 00:21:15.505 "trsvcid": "38282" 00:21:15.505 }, 00:21:15.505 "auth": { 00:21:15.505 "state": "completed", 00:21:15.505 "digest": "sha384", 00:21:15.505 "dhgroup": "ffdhe8192" 00:21:15.505 } 00:21:15.505 } 00:21:15.505 ]' 00:21:15.505 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.505 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:15.505 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.763 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:15.763 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.763 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.763 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.763 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.022 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:21:16.022 06:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:21:16.590 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.590 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:16.591 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.591 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.591 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.591 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.591 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:16.591 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:16.591 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:16.591 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.591 06:11:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:16.591 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:16.591 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:16.591 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.591 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.591 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.591 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.591 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.591 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.591 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.591 06:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.158 00:21:17.158 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.158 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.158 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.417 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.417 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.417 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.417 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.417 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.417 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.417 { 00:21:17.417 "cntlid": 91, 00:21:17.417 "qid": 0, 00:21:17.417 "state": "enabled", 00:21:17.417 "thread": "nvmf_tgt_poll_group_000", 00:21:17.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:17.417 "listen_address": { 00:21:17.417 "trtype": "TCP", 00:21:17.417 "adrfam": "IPv4", 00:21:17.417 "traddr": "10.0.0.2", 00:21:17.417 "trsvcid": "4420" 00:21:17.417 }, 00:21:17.417 "peer_address": { 00:21:17.417 "trtype": "TCP", 00:21:17.417 "adrfam": "IPv4", 00:21:17.417 "traddr": "10.0.0.1", 00:21:17.417 "trsvcid": "51494" 00:21:17.417 }, 00:21:17.417 "auth": { 00:21:17.417 "state": "completed", 00:21:17.417 "digest": "sha384", 00:21:17.417 "dhgroup": "ffdhe8192" 00:21:17.417 } 00:21:17.417 } 00:21:17.417 ]' 00:21:17.417 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.417 06:11:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:17.417 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.417 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:17.417 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.417 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.417 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.417 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.676 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:21:17.676 06:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:21:18.244 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.244 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:18.244 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.244 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.244 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.244 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.244 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:18.244 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:18.503 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:18.503 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.503 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:18.503 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:18.503 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:18.503 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.503 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:18.503 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.503 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.503 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.503 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.503 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.503 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.071 00:21:19.071 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.071 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.071 06:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.071 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.071 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.071 06:11:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.071 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.071 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.071 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.071 { 00:21:19.071 "cntlid": 93, 00:21:19.071 "qid": 0, 00:21:19.071 "state": "enabled", 00:21:19.071 "thread": "nvmf_tgt_poll_group_000", 00:21:19.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:19.071 "listen_address": { 00:21:19.071 "trtype": "TCP", 00:21:19.071 "adrfam": "IPv4", 00:21:19.071 "traddr": "10.0.0.2", 00:21:19.071 "trsvcid": "4420" 00:21:19.071 }, 00:21:19.071 "peer_address": { 00:21:19.071 "trtype": "TCP", 00:21:19.071 "adrfam": "IPv4", 00:21:19.071 "traddr": "10.0.0.1", 00:21:19.071 "trsvcid": "51512" 00:21:19.071 }, 00:21:19.071 "auth": { 00:21:19.071 "state": "completed", 00:21:19.071 "digest": "sha384", 00:21:19.071 "dhgroup": "ffdhe8192" 00:21:19.071 } 00:21:19.071 } 00:21:19.071 ]' 00:21:19.071 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.071 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:19.071 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.330 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:19.330 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.330 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.330 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.330 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.589 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:21:19.589 06:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:21:20.157 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.157 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:20.157 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.157 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.157 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.157 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.157 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:20.157 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:20.157 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:20.157 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.157 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:20.157 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:20.157 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:20.157 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.157 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:20.157 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.157 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.157 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.157 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:20.157 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:20.157 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:20.725 00:21:20.725 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.725 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.725 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.984 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.984 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.984 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.984 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.984 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.984 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.984 { 00:21:20.984 "cntlid": 95, 00:21:20.984 "qid": 0, 00:21:20.984 "state": "enabled", 00:21:20.984 "thread": "nvmf_tgt_poll_group_000", 00:21:20.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:20.984 "listen_address": { 00:21:20.984 "trtype": "TCP", 00:21:20.984 "adrfam": 
"IPv4", 00:21:20.984 "traddr": "10.0.0.2", 00:21:20.984 "trsvcid": "4420" 00:21:20.984 }, 00:21:20.984 "peer_address": { 00:21:20.984 "trtype": "TCP", 00:21:20.984 "adrfam": "IPv4", 00:21:20.984 "traddr": "10.0.0.1", 00:21:20.984 "trsvcid": "51532" 00:21:20.984 }, 00:21:20.984 "auth": { 00:21:20.984 "state": "completed", 00:21:20.984 "digest": "sha384", 00:21:20.984 "dhgroup": "ffdhe8192" 00:21:20.984 } 00:21:20.984 } 00:21:20.984 ]' 00:21:20.984 06:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.984 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:20.984 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.984 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:20.984 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.984 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.984 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.984 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.243 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:21:21.243 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:21:21.811 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.811 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:21.811 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.811 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.811 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.811 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:21.811 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:21.811 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.811 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:21.811 06:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:22.070 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:22.070 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.070 
06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:22.070 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:22.070 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:22.070 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.070 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.070 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.070 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.070 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.070 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.070 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.070 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.329 00:21:22.329 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.329 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.329 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.588 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.588 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.588 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.588 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.588 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.588 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.588 { 00:21:22.588 "cntlid": 97, 00:21:22.588 "qid": 0, 00:21:22.588 "state": "enabled", 00:21:22.588 "thread": "nvmf_tgt_poll_group_000", 00:21:22.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:22.588 "listen_address": { 00:21:22.588 "trtype": "TCP", 00:21:22.588 "adrfam": "IPv4", 00:21:22.588 "traddr": "10.0.0.2", 00:21:22.588 "trsvcid": "4420" 00:21:22.588 }, 00:21:22.588 "peer_address": { 00:21:22.588 "trtype": "TCP", 00:21:22.588 "adrfam": "IPv4", 00:21:22.588 "traddr": "10.0.0.1", 00:21:22.588 "trsvcid": "51558" 00:21:22.588 }, 00:21:22.588 "auth": { 00:21:22.588 "state": "completed", 00:21:22.588 "digest": "sha512", 00:21:22.588 "dhgroup": "null" 00:21:22.588 } 00:21:22.588 } 00:21:22.588 ]' 00:21:22.588 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.588 06:11:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.588 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.588 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:22.588 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.588 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.588 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.588 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.847 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:21:22.847 06:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:21:23.415 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.415 06:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:23.415 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.415 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.415 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.415 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.415 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:23.415 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:23.674 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:23.674 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.674 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:23.674 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:23.674 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:23.674 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.674 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.674 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.674 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.674 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.674 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.674 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.674 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.933 00:21:23.933 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.933 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.933 06:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.192 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.192 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.192 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.192 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.192 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.192 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.192 { 00:21:24.192 "cntlid": 99, 00:21:24.192 "qid": 0, 00:21:24.192 "state": "enabled", 00:21:24.192 "thread": "nvmf_tgt_poll_group_000", 00:21:24.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:24.192 "listen_address": { 00:21:24.192 "trtype": "TCP", 00:21:24.192 "adrfam": "IPv4", 00:21:24.192 "traddr": "10.0.0.2", 00:21:24.192 "trsvcid": "4420" 00:21:24.192 }, 00:21:24.192 "peer_address": { 00:21:24.192 "trtype": "TCP", 00:21:24.192 "adrfam": "IPv4", 00:21:24.192 "traddr": "10.0.0.1", 00:21:24.192 "trsvcid": "51588" 00:21:24.192 }, 00:21:24.192 "auth": { 00:21:24.192 "state": "completed", 00:21:24.192 "digest": "sha512", 00:21:24.192 "dhgroup": "null" 00:21:24.192 } 00:21:24.192 } 00:21:24.192 ]' 00:21:24.192 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.192 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.192 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.192 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:24.192 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.192 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.192 
06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.192 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.451 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:21:24.451 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:21:25.018 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.018 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:25.018 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.018 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.018 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.018 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.018 
06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:25.018 06:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:25.276 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:25.276 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.276 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:25.276 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:25.276 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:25.276 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.276 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.276 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.276 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.276 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.276 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.276 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.276 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.535 00:21:25.535 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.535 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.535 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.535 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.535 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.535 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.535 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.535 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.535 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.535 { 00:21:25.535 "cntlid": 101, 00:21:25.535 "qid": 0, 00:21:25.535 "state": "enabled", 00:21:25.535 "thread": "nvmf_tgt_poll_group_000", 00:21:25.535 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:25.535 "listen_address": { 00:21:25.535 "trtype": "TCP", 00:21:25.535 "adrfam": "IPv4", 00:21:25.535 "traddr": "10.0.0.2", 00:21:25.535 "trsvcid": "4420" 00:21:25.535 }, 00:21:25.535 "peer_address": { 00:21:25.535 "trtype": "TCP", 00:21:25.535 "adrfam": "IPv4", 00:21:25.535 "traddr": "10.0.0.1", 00:21:25.535 "trsvcid": "51620" 00:21:25.535 }, 00:21:25.535 "auth": { 00:21:25.535 "state": "completed", 00:21:25.535 "digest": "sha512", 00:21:25.535 "dhgroup": "null" 00:21:25.535 } 00:21:25.535 } 00:21:25.535 ]' 00:21:25.535 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.793 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.794 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.794 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:25.794 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.794 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.794 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.794 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.052 06:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:21:26.052 06:11:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:21:26.619 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.619 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:26.619 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.619 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.619 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.619 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.619 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:26.619 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:26.878 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:26.878 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:21:26.878 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:26.878 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:26.878 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:26.878 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.878 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:26.878 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.878 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.878 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.878 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:26.878 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.878 06:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.878 00:21:27.137 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.137 
06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.137 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.137 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.137 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.137 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.137 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.137 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.137 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.137 { 00:21:27.137 "cntlid": 103, 00:21:27.137 "qid": 0, 00:21:27.137 "state": "enabled", 00:21:27.137 "thread": "nvmf_tgt_poll_group_000", 00:21:27.137 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:27.137 "listen_address": { 00:21:27.137 "trtype": "TCP", 00:21:27.137 "adrfam": "IPv4", 00:21:27.137 "traddr": "10.0.0.2", 00:21:27.137 "trsvcid": "4420" 00:21:27.137 }, 00:21:27.137 "peer_address": { 00:21:27.137 "trtype": "TCP", 00:21:27.137 "adrfam": "IPv4", 00:21:27.137 "traddr": "10.0.0.1", 00:21:27.137 "trsvcid": "45690" 00:21:27.137 }, 00:21:27.137 "auth": { 00:21:27.137 "state": "completed", 00:21:27.137 "digest": "sha512", 00:21:27.137 "dhgroup": "null" 00:21:27.137 } 00:21:27.137 } 00:21:27.137 ]' 00:21:27.137 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.137 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ 
sha512 == \s\h\a\5\1\2 ]] 00:21:27.137 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.396 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:27.396 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.396 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.396 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.396 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.655 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:21:27.655 06:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:21:28.223 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.223 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:28.223 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.223 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.223 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.223 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.223 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.223 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:28.223 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:28.223 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:28.223 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.223 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:28.223 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:28.223 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:28.223 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.223 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.223 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.223 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.223 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.223 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.223 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.223 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.482 00:21:28.482 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.482 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.482 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.741 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.741 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.741 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:28.741 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.741 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.741 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.741 { 00:21:28.741 "cntlid": 105, 00:21:28.741 "qid": 0, 00:21:28.741 "state": "enabled", 00:21:28.741 "thread": "nvmf_tgt_poll_group_000", 00:21:28.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:28.741 "listen_address": { 00:21:28.741 "trtype": "TCP", 00:21:28.741 "adrfam": "IPv4", 00:21:28.741 "traddr": "10.0.0.2", 00:21:28.741 "trsvcid": "4420" 00:21:28.741 }, 00:21:28.741 "peer_address": { 00:21:28.741 "trtype": "TCP", 00:21:28.741 "adrfam": "IPv4", 00:21:28.741 "traddr": "10.0.0.1", 00:21:28.741 "trsvcid": "45726" 00:21:28.741 }, 00:21:28.741 "auth": { 00:21:28.741 "state": "completed", 00:21:28.741 "digest": "sha512", 00:21:28.741 "dhgroup": "ffdhe2048" 00:21:28.741 } 00:21:28.741 } 00:21:28.741 ]' 00:21:28.741 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.741 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.741 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.741 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:28.741 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.000 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.000 06:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.000 06:11:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.000 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:21:29.000 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:21:29.567 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.567 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:29.567 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.567 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.567 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.567 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.567 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:29.567 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:29.826 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:29.826 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.826 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:29.826 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:29.826 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:29.826 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.826 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.826 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.826 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.826 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.826 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.826 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.826 06:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.084 00:21:30.084 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.084 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.084 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.343 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.343 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.343 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.343 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.343 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.343 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.343 { 00:21:30.343 "cntlid": 107, 00:21:30.343 "qid": 0, 00:21:30.343 "state": "enabled", 00:21:30.343 "thread": "nvmf_tgt_poll_group_000", 00:21:30.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:30.343 
"listen_address": { 00:21:30.343 "trtype": "TCP", 00:21:30.343 "adrfam": "IPv4", 00:21:30.343 "traddr": "10.0.0.2", 00:21:30.343 "trsvcid": "4420" 00:21:30.343 }, 00:21:30.343 "peer_address": { 00:21:30.343 "trtype": "TCP", 00:21:30.343 "adrfam": "IPv4", 00:21:30.343 "traddr": "10.0.0.1", 00:21:30.343 "trsvcid": "45746" 00:21:30.343 }, 00:21:30.343 "auth": { 00:21:30.343 "state": "completed", 00:21:30.343 "digest": "sha512", 00:21:30.343 "dhgroup": "ffdhe2048" 00:21:30.343 } 00:21:30.343 } 00:21:30.343 ]' 00:21:30.343 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.343 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.343 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.343 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:30.343 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.602 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.602 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.602 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.602 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:21:30.602 06:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:21:31.169 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.169 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:31.169 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.169 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.169 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.169 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.169 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:31.169 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:31.427 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:31.427 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.427 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:31.427 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:31.427 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:31.427 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.427 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.427 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.427 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.427 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.427 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.427 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.427 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.686 00:21:31.686 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:31.686 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.686 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.945 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.945 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.945 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.945 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.945 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.945 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.945 { 00:21:31.945 "cntlid": 109, 00:21:31.945 "qid": 0, 00:21:31.945 "state": "enabled", 00:21:31.945 "thread": "nvmf_tgt_poll_group_000", 00:21:31.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:31.945 "listen_address": { 00:21:31.945 "trtype": "TCP", 00:21:31.945 "adrfam": "IPv4", 00:21:31.945 "traddr": "10.0.0.2", 00:21:31.945 "trsvcid": "4420" 00:21:31.945 }, 00:21:31.945 "peer_address": { 00:21:31.945 "trtype": "TCP", 00:21:31.945 "adrfam": "IPv4", 00:21:31.945 "traddr": "10.0.0.1", 00:21:31.945 "trsvcid": "45756" 00:21:31.945 }, 00:21:31.945 "auth": { 00:21:31.945 "state": "completed", 00:21:31.945 "digest": "sha512", 00:21:31.945 "dhgroup": "ffdhe2048" 00:21:31.945 } 00:21:31.945 } 00:21:31.945 ]' 00:21:31.945 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.945 06:11:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.945 06:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.945 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:31.945 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.945 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.945 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.945 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.204 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:21:32.204 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:21:32.772 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.772 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:32.772 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.772 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.772 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.772 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.772 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:32.772 06:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:33.031 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:33.031 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.031 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:33.031 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:33.031 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:33.031 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.031 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:33.031 06:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.031 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.031 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.031 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:33.031 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.031 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.290 00:21:33.290 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.290 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.290 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.548 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.548 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.548 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.548 06:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.548 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.548 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.548 { 00:21:33.548 "cntlid": 111, 00:21:33.548 "qid": 0, 00:21:33.548 "state": "enabled", 00:21:33.548 "thread": "nvmf_tgt_poll_group_000", 00:21:33.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:33.548 "listen_address": { 00:21:33.548 "trtype": "TCP", 00:21:33.548 "adrfam": "IPv4", 00:21:33.548 "traddr": "10.0.0.2", 00:21:33.548 "trsvcid": "4420" 00:21:33.548 }, 00:21:33.548 "peer_address": { 00:21:33.548 "trtype": "TCP", 00:21:33.548 "adrfam": "IPv4", 00:21:33.548 "traddr": "10.0.0.1", 00:21:33.548 "trsvcid": "45802" 00:21:33.548 }, 00:21:33.548 "auth": { 00:21:33.548 "state": "completed", 00:21:33.548 "digest": "sha512", 00:21:33.548 "dhgroup": "ffdhe2048" 00:21:33.548 } 00:21:33.548 } 00:21:33.548 ]' 00:21:33.548 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.548 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.548 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.548 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:33.548 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.548 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.548 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.548 06:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.807 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:21:33.807 06:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:21:34.375 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.375 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:34.375 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.375 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.375 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.375 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:34.375 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.375 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe3072 00:21:34.375 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:34.740 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:34.740 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.740 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:34.740 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:34.740 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:34.740 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.740 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.740 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.740 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.740 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.740 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.740 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.740 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.055 00:21:35.055 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.055 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.055 06:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.055 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.055 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.055 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.055 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.055 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.055 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.055 { 00:21:35.055 "cntlid": 113, 00:21:35.056 "qid": 0, 00:21:35.056 "state": "enabled", 00:21:35.056 "thread": "nvmf_tgt_poll_group_000", 00:21:35.056 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:35.056 "listen_address": { 
00:21:35.056 "trtype": "TCP", 00:21:35.056 "adrfam": "IPv4", 00:21:35.056 "traddr": "10.0.0.2", 00:21:35.056 "trsvcid": "4420" 00:21:35.056 }, 00:21:35.056 "peer_address": { 00:21:35.056 "trtype": "TCP", 00:21:35.056 "adrfam": "IPv4", 00:21:35.056 "traddr": "10.0.0.1", 00:21:35.056 "trsvcid": "45826" 00:21:35.056 }, 00:21:35.056 "auth": { 00:21:35.056 "state": "completed", 00:21:35.056 "digest": "sha512", 00:21:35.056 "dhgroup": "ffdhe3072" 00:21:35.056 } 00:21:35.056 } 00:21:35.056 ]' 00:21:35.056 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.056 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.056 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.056 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:35.314 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.314 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.314 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.314 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.314 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:21:35.314 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:21:35.881 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.881 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:35.881 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.881 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.881 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.881 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.881 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:35.881 06:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:36.140 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:36.140 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:21:36.140 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:36.140 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:36.140 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:36.140 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.140 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.140 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.140 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.140 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.140 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.140 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.140 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.399 00:21:36.399 06:11:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.399 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.399 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.658 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.658 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.658 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.658 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.658 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.658 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.658 { 00:21:36.658 "cntlid": 115, 00:21:36.658 "qid": 0, 00:21:36.658 "state": "enabled", 00:21:36.658 "thread": "nvmf_tgt_poll_group_000", 00:21:36.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:36.658 "listen_address": { 00:21:36.658 "trtype": "TCP", 00:21:36.658 "adrfam": "IPv4", 00:21:36.658 "traddr": "10.0.0.2", 00:21:36.658 "trsvcid": "4420" 00:21:36.658 }, 00:21:36.658 "peer_address": { 00:21:36.658 "trtype": "TCP", 00:21:36.658 "adrfam": "IPv4", 00:21:36.658 "traddr": "10.0.0.1", 00:21:36.658 "trsvcid": "45844" 00:21:36.658 }, 00:21:36.658 "auth": { 00:21:36.658 "state": "completed", 00:21:36.658 "digest": "sha512", 00:21:36.658 "dhgroup": "ffdhe3072" 00:21:36.658 } 00:21:36.658 } 00:21:36.658 ]' 00:21:36.658 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # 
jq -r '.[0].auth.digest' 00:21:36.658 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.658 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.658 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:36.658 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.658 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.658 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.658 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.917 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:21:36.917 06:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:21:37.483 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.483 06:11:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:37.483 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.483 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.483 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.483 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.483 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:37.483 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:37.742 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:21:37.742 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.742 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:37.742 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:37.742 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:37.742 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.742 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.742 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.742 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.742 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.742 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.742 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.742 06:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.001 00:21:38.001 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.001 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.001 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.260 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.260 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.260 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.260 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.260 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.260 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.260 { 00:21:38.260 "cntlid": 117, 00:21:38.260 "qid": 0, 00:21:38.260 "state": "enabled", 00:21:38.260 "thread": "nvmf_tgt_poll_group_000", 00:21:38.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:38.260 "listen_address": { 00:21:38.260 "trtype": "TCP", 00:21:38.260 "adrfam": "IPv4", 00:21:38.260 "traddr": "10.0.0.2", 00:21:38.260 "trsvcid": "4420" 00:21:38.260 }, 00:21:38.260 "peer_address": { 00:21:38.260 "trtype": "TCP", 00:21:38.260 "adrfam": "IPv4", 00:21:38.260 "traddr": "10.0.0.1", 00:21:38.260 "trsvcid": "60706" 00:21:38.260 }, 00:21:38.260 "auth": { 00:21:38.260 "state": "completed", 00:21:38.260 "digest": "sha512", 00:21:38.260 "dhgroup": "ffdhe3072" 00:21:38.260 } 00:21:38.260 } 00:21:38.260 ]' 00:21:38.260 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.260 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.260 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.260 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:38.260 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.260 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:38.260 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.260 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.519 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:21:38.519 06:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:21:39.086 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.086 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:39.086 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.086 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.086 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.086 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:21:39.086 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:39.086 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:39.344 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:39.345 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.345 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:39.345 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:39.345 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:39.345 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.345 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:39.345 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.345 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.345 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.345 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:39.345 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:39.345 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:39.603 00:21:39.603 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.603 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.603 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.862 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.862 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.862 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.862 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.862 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.862 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.862 { 00:21:39.862 "cntlid": 119, 00:21:39.862 "qid": 0, 00:21:39.862 "state": "enabled", 00:21:39.862 "thread": "nvmf_tgt_poll_group_000", 00:21:39.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:39.862 "listen_address": { 00:21:39.862 
"trtype": "TCP", 00:21:39.862 "adrfam": "IPv4", 00:21:39.862 "traddr": "10.0.0.2", 00:21:39.862 "trsvcid": "4420" 00:21:39.862 }, 00:21:39.862 "peer_address": { 00:21:39.862 "trtype": "TCP", 00:21:39.862 "adrfam": "IPv4", 00:21:39.862 "traddr": "10.0.0.1", 00:21:39.862 "trsvcid": "60722" 00:21:39.862 }, 00:21:39.862 "auth": { 00:21:39.862 "state": "completed", 00:21:39.862 "digest": "sha512", 00:21:39.862 "dhgroup": "ffdhe3072" 00:21:39.862 } 00:21:39.862 } 00:21:39.862 ]' 00:21:39.862 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.862 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.862 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.862 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:39.862 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.862 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.862 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.862 06:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.121 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:21:40.121 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:21:40.688 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.688 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:40.688 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.688 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.688 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.688 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:40.688 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.688 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:40.688 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:40.947 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:40.947 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.947 06:12:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:40.947 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:40.947 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:40.947 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.947 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.947 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.947 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.947 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.947 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.947 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.947 06:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.205 00:21:41.205 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.205 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.205 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.463 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.463 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.463 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.463 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.463 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.463 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.463 { 00:21:41.463 "cntlid": 121, 00:21:41.463 "qid": 0, 00:21:41.463 "state": "enabled", 00:21:41.463 "thread": "nvmf_tgt_poll_group_000", 00:21:41.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:41.463 "listen_address": { 00:21:41.463 "trtype": "TCP", 00:21:41.463 "adrfam": "IPv4", 00:21:41.463 "traddr": "10.0.0.2", 00:21:41.463 "trsvcid": "4420" 00:21:41.463 }, 00:21:41.463 "peer_address": { 00:21:41.463 "trtype": "TCP", 00:21:41.463 "adrfam": "IPv4", 00:21:41.463 "traddr": "10.0.0.1", 00:21:41.463 "trsvcid": "60758" 00:21:41.463 }, 00:21:41.463 "auth": { 00:21:41.463 "state": "completed", 00:21:41.463 "digest": "sha512", 00:21:41.463 "dhgroup": "ffdhe4096" 00:21:41.463 } 00:21:41.463 } 00:21:41.463 ]' 00:21:41.463 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.463 06:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.463 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.463 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:41.463 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.463 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.463 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.463 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.722 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:21:41.722 06:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:21:42.289 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:21:42.289 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:42.289 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.289 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.289 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.289 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.289 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:42.289 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:42.548 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:42.548 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.548 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:42.548 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:42.548 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:42.548 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.548 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.548 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.548 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.548 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.548 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.548 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.548 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.807 00:21:42.807 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.807 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.807 06:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.065 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.065 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.065 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.065 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.065 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.065 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.065 { 00:21:43.065 "cntlid": 123, 00:21:43.065 "qid": 0, 00:21:43.066 "state": "enabled", 00:21:43.066 "thread": "nvmf_tgt_poll_group_000", 00:21:43.066 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:43.066 "listen_address": { 00:21:43.066 "trtype": "TCP", 00:21:43.066 "adrfam": "IPv4", 00:21:43.066 "traddr": "10.0.0.2", 00:21:43.066 "trsvcid": "4420" 00:21:43.066 }, 00:21:43.066 "peer_address": { 00:21:43.066 "trtype": "TCP", 00:21:43.066 "adrfam": "IPv4", 00:21:43.066 "traddr": "10.0.0.1", 00:21:43.066 "trsvcid": "60800" 00:21:43.066 }, 00:21:43.066 "auth": { 00:21:43.066 "state": "completed", 00:21:43.066 "digest": "sha512", 00:21:43.066 "dhgroup": "ffdhe4096" 00:21:43.066 } 00:21:43.066 } 00:21:43.066 ]' 00:21:43.066 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.066 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.066 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.066 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:43.066 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.066 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:43.066 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.066 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.325 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:21:43.325 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:21:43.892 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.892 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:43.892 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.892 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.892 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.892 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:21:43.892 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:43.892 06:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:44.151 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:44.151 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.151 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:44.151 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:44.151 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:44.151 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.151 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.151 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.151 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.151 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.151 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.151 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.151 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.409 00:21:44.409 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.409 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.409 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.668 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.668 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.668 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.668 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.668 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.668 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.668 { 00:21:44.668 "cntlid": 125, 00:21:44.668 "qid": 0, 00:21:44.668 "state": "enabled", 00:21:44.668 "thread": "nvmf_tgt_poll_group_000", 00:21:44.668 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:44.668 "listen_address": { 00:21:44.668 "trtype": "TCP", 00:21:44.668 "adrfam": "IPv4", 00:21:44.668 "traddr": "10.0.0.2", 00:21:44.668 "trsvcid": "4420" 00:21:44.668 }, 00:21:44.668 "peer_address": { 00:21:44.668 "trtype": "TCP", 00:21:44.668 "adrfam": "IPv4", 00:21:44.668 "traddr": "10.0.0.1", 00:21:44.668 "trsvcid": "60836" 00:21:44.668 }, 00:21:44.668 "auth": { 00:21:44.668 "state": "completed", 00:21:44.668 "digest": "sha512", 00:21:44.668 "dhgroup": "ffdhe4096" 00:21:44.668 } 00:21:44.668 } 00:21:44.668 ]' 00:21:44.668 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.668 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.668 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.668 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:44.668 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.927 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.927 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.927 06:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.927 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:21:44.927 06:12:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:21:45.493 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.493 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:45.493 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.493 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.493 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.752 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:45.752 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:45.753 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:45.753 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:45.753 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:21:45.753 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:45.753 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:45.753 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:45.753 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.753 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:45.753 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.753 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.753 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.753 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:45.753 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:45.753 06:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:46.011 00:21:46.011 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:46.011 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.011 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.270 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.270 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.270 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.270 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.270 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.270 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.270 { 00:21:46.270 "cntlid": 127, 00:21:46.270 "qid": 0, 00:21:46.270 "state": "enabled", 00:21:46.270 "thread": "nvmf_tgt_poll_group_000", 00:21:46.270 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:46.270 "listen_address": { 00:21:46.270 "trtype": "TCP", 00:21:46.271 "adrfam": "IPv4", 00:21:46.271 "traddr": "10.0.0.2", 00:21:46.271 "trsvcid": "4420" 00:21:46.271 }, 00:21:46.271 "peer_address": { 00:21:46.271 "trtype": "TCP", 00:21:46.271 "adrfam": "IPv4", 00:21:46.271 "traddr": "10.0.0.1", 00:21:46.271 "trsvcid": "60864" 00:21:46.271 }, 00:21:46.271 "auth": { 00:21:46.271 "state": "completed", 00:21:46.271 "digest": "sha512", 00:21:46.271 "dhgroup": "ffdhe4096" 00:21:46.271 } 00:21:46.271 } 00:21:46.271 ]' 00:21:46.271 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.271 06:12:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.271 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.271 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:46.529 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.529 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.529 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.529 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.529 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:21:46.529 06:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:21:47.095 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.353 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:47.353 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.353 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.353 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.353 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:47.353 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:47.353 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:47.353 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:47.353 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:47.353 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.353 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:47.353 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:47.353 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:47.353 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.353 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.353 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.353 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.353 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.353 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.353 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.353 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.920 00:21:47.920 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.920 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.920 06:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.920 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.920 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.920 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.920 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.920 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.920 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.920 { 00:21:47.920 "cntlid": 129, 00:21:47.920 "qid": 0, 00:21:47.920 "state": "enabled", 00:21:47.920 "thread": "nvmf_tgt_poll_group_000", 00:21:47.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:47.920 "listen_address": { 00:21:47.920 "trtype": "TCP", 00:21:47.920 "adrfam": "IPv4", 00:21:47.920 "traddr": "10.0.0.2", 00:21:47.920 "trsvcid": "4420" 00:21:47.920 }, 00:21:47.920 "peer_address": { 00:21:47.920 "trtype": "TCP", 00:21:47.920 "adrfam": "IPv4", 00:21:47.920 "traddr": "10.0.0.1", 00:21:47.920 "trsvcid": "52366" 00:21:47.920 }, 00:21:47.920 "auth": { 00:21:47.920 "state": "completed", 00:21:47.920 "digest": "sha512", 00:21:47.920 "dhgroup": "ffdhe6144" 00:21:47.920 } 00:21:47.920 } 00:21:47.920 ]' 00:21:47.920 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.179 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.179 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:48.179 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:48.179 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:48.179 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:48.179 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.179 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.438 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:21:48.438 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:21:49.006 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.006 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:49.006 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.006 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.006 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.006 06:12:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.006 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:49.006 06:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:49.006 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:49.265 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.265 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:49.265 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:49.265 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:49.265 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.265 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.265 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.265 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.265 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.265 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:21:49.265 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.265 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.524 00:21:49.524 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.524 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.524 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.783 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.783 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.783 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.783 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.783 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.783 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.783 { 00:21:49.783 "cntlid": 131, 00:21:49.783 "qid": 0, 00:21:49.783 "state": 
"enabled", 00:21:49.783 "thread": "nvmf_tgt_poll_group_000", 00:21:49.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:49.783 "listen_address": { 00:21:49.783 "trtype": "TCP", 00:21:49.783 "adrfam": "IPv4", 00:21:49.783 "traddr": "10.0.0.2", 00:21:49.783 "trsvcid": "4420" 00:21:49.783 }, 00:21:49.783 "peer_address": { 00:21:49.783 "trtype": "TCP", 00:21:49.783 "adrfam": "IPv4", 00:21:49.783 "traddr": "10.0.0.1", 00:21:49.783 "trsvcid": "52390" 00:21:49.783 }, 00:21:49.783 "auth": { 00:21:49.783 "state": "completed", 00:21:49.783 "digest": "sha512", 00:21:49.783 "dhgroup": "ffdhe6144" 00:21:49.783 } 00:21:49.783 } 00:21:49.783 ]' 00:21:49.783 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.783 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.783 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.783 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:49.783 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.783 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.783 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.783 06:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.042 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret 
DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:21:50.042 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:21:50.609 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.609 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:50.609 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.609 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.609 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.610 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.610 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:50.610 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:50.868 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 
ffdhe6144 2 00:21:50.868 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.868 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:50.868 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:50.868 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:50.868 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.868 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.868 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.868 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.868 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.868 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.868 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.868 06:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.127 00:21:51.127 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:51.127 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:51.127 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.385 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.385 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.385 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.385 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.385 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.385 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:51.385 { 00:21:51.385 "cntlid": 133, 00:21:51.385 "qid": 0, 00:21:51.385 "state": "enabled", 00:21:51.385 "thread": "nvmf_tgt_poll_group_000", 00:21:51.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:51.385 "listen_address": { 00:21:51.385 "trtype": "TCP", 00:21:51.385 "adrfam": "IPv4", 00:21:51.385 "traddr": "10.0.0.2", 00:21:51.385 "trsvcid": "4420" 00:21:51.385 }, 00:21:51.385 "peer_address": { 00:21:51.385 "trtype": "TCP", 00:21:51.385 "adrfam": "IPv4", 00:21:51.385 "traddr": "10.0.0.1", 00:21:51.385 "trsvcid": "52414" 00:21:51.385 }, 00:21:51.385 "auth": { 00:21:51.385 "state": "completed", 00:21:51.385 "digest": "sha512", 00:21:51.385 "dhgroup": "ffdhe6144" 00:21:51.385 } 
00:21:51.385 } 00:21:51.385 ]' 00:21:51.385 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:51.385 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:51.385 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.385 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:51.385 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.643 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.643 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.643 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.643 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:21:51.643 06:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:21:52.211 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:21:52.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.211 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:52.211 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.211 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.211 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.211 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.211 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:52.211 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:52.470 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:52.470 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.470 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:52.470 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:52.470 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:52.470 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.470 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:52.470 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.470 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.470 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.470 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:52.470 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:52.470 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:52.729 00:21:52.988 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.988 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.988 06:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.988 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.988 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:52.988 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.988 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.988 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.988 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.988 { 00:21:52.988 "cntlid": 135, 00:21:52.988 "qid": 0, 00:21:52.988 "state": "enabled", 00:21:52.988 "thread": "nvmf_tgt_poll_group_000", 00:21:52.988 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:52.988 "listen_address": { 00:21:52.988 "trtype": "TCP", 00:21:52.988 "adrfam": "IPv4", 00:21:52.988 "traddr": "10.0.0.2", 00:21:52.988 "trsvcid": "4420" 00:21:52.988 }, 00:21:52.988 "peer_address": { 00:21:52.988 "trtype": "TCP", 00:21:52.988 "adrfam": "IPv4", 00:21:52.988 "traddr": "10.0.0.1", 00:21:52.988 "trsvcid": "52426" 00:21:52.988 }, 00:21:52.988 "auth": { 00:21:52.988 "state": "completed", 00:21:52.989 "digest": "sha512", 00:21:52.989 "dhgroup": "ffdhe6144" 00:21:52.989 } 00:21:52.989 } 00:21:52.989 ]' 00:21:52.989 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.989 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.989 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:53.248 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:53.248 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:53.248 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.248 06:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.248 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.506 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:21:53.506 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:21:54.075 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.075 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:54.075 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.075 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.075 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.075 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:54.075 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:54.075 06:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:54.075 06:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:54.075 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:54.075 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.075 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:54.075 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:54.075 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:54.075 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.075 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.075 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.075 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.075 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.075 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.075 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.075 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.642 00:21:54.642 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.642 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.642 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.901 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.901 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.901 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.901 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.901 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.901 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.901 { 00:21:54.901 "cntlid": 137, 00:21:54.901 "qid": 0, 00:21:54.901 "state": "enabled", 00:21:54.901 "thread": "nvmf_tgt_poll_group_000", 00:21:54.901 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:54.901 "listen_address": { 00:21:54.901 "trtype": "TCP", 00:21:54.901 "adrfam": "IPv4", 00:21:54.901 "traddr": "10.0.0.2", 00:21:54.901 "trsvcid": "4420" 00:21:54.901 }, 00:21:54.901 "peer_address": { 00:21:54.901 "trtype": "TCP", 00:21:54.901 "adrfam": "IPv4", 00:21:54.901 "traddr": "10.0.0.1", 00:21:54.901 "trsvcid": "52452" 00:21:54.901 }, 00:21:54.901 "auth": { 00:21:54.901 "state": "completed", 00:21:54.901 "digest": "sha512", 00:21:54.901 "dhgroup": "ffdhe8192" 00:21:54.901 } 00:21:54.901 } 00:21:54.901 ]' 00:21:54.901 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.901 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.901 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.901 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:54.901 06:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.901 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.901 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.901 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.169 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret 
DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:21:55.169 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:21:55.741 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.741 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:55.741 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.741 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.741 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.741 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.741 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:55.741 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:56.000 06:12:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:56.000 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:56.000 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:56.000 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:56.000 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:56.000 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.000 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.000 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.000 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.000 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.000 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.000 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.000 06:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.568 00:21:56.568 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.568 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.568 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.568 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.568 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.568 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.568 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.568 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.568 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.568 { 00:21:56.568 "cntlid": 139, 00:21:56.568 "qid": 0, 00:21:56.568 "state": "enabled", 00:21:56.568 "thread": "nvmf_tgt_poll_group_000", 00:21:56.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:56.568 "listen_address": { 00:21:56.568 "trtype": "TCP", 00:21:56.568 "adrfam": "IPv4", 00:21:56.568 "traddr": "10.0.0.2", 00:21:56.568 "trsvcid": "4420" 00:21:56.568 }, 00:21:56.568 "peer_address": { 00:21:56.568 "trtype": "TCP", 00:21:56.568 "adrfam": "IPv4", 00:21:56.568 "traddr": "10.0.0.1", 00:21:56.568 "trsvcid": "52472" 00:21:56.568 }, 00:21:56.568 "auth": { 00:21:56.568 "state": 
"completed", 00:21:56.568 "digest": "sha512", 00:21:56.568 "dhgroup": "ffdhe8192" 00:21:56.568 } 00:21:56.568 } 00:21:56.568 ]' 00:21:56.568 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.568 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.568 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.827 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:56.827 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.827 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.827 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.827 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.086 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:21:57.086 06:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: --dhchap-ctrl-secret DHHC-1:02:MjQ0MTY4MzUzMDk3MDQzY2U1NTcxMzI0ZDYxOWUxZTk0YWIzYWRjODQ4NjEwMWYxol2dXg==: 00:21:57.654 06:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.654 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:57.654 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.654 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.654 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.654 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.654 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:57.654 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:57.654 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:57.654 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.654 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:57.654 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:57.654 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:57.654 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.654 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.654 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.654 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.654 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.654 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.654 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.654 06:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.221 00:21:58.221 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:58.221 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:58.221 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.481 
06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.481 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.481 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.481 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.481 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.481 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.481 { 00:21:58.481 "cntlid": 141, 00:21:58.481 "qid": 0, 00:21:58.481 "state": "enabled", 00:21:58.481 "thread": "nvmf_tgt_poll_group_000", 00:21:58.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:58.481 "listen_address": { 00:21:58.481 "trtype": "TCP", 00:21:58.481 "adrfam": "IPv4", 00:21:58.481 "traddr": "10.0.0.2", 00:21:58.481 "trsvcid": "4420" 00:21:58.481 }, 00:21:58.481 "peer_address": { 00:21:58.481 "trtype": "TCP", 00:21:58.481 "adrfam": "IPv4", 00:21:58.481 "traddr": "10.0.0.1", 00:21:58.481 "trsvcid": "52120" 00:21:58.481 }, 00:21:58.481 "auth": { 00:21:58.481 "state": "completed", 00:21:58.481 "digest": "sha512", 00:21:58.481 "dhgroup": "ffdhe8192" 00:21:58.481 } 00:21:58.481 } 00:21:58.481 ]' 00:21:58.481 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.481 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.481 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.481 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:58.481 06:12:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.481 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.481 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.481 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.740 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:21:58.740 06:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:01:ODcxMDMwYzllN2JlMWU4MzM1NzZlNDdkOGMyNTc1Y2ap7Ehz: 00:21:59.307 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.307 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:59.307 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.307 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.307 
06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.307 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:59.307 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:59.307 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:59.566 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:59.566 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.566 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:59.566 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:59.566 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:59.566 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.566 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:59.566 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.566 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.566 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.566 06:12:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:59.566 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:59.566 06:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:00.134 00:22:00.134 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.134 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.134 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.134 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.134 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.134 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.134 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.134 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.134 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:00.134 { 00:22:00.134 "cntlid": 143, 
00:22:00.134 "qid": 0, 00:22:00.134 "state": "enabled", 00:22:00.134 "thread": "nvmf_tgt_poll_group_000", 00:22:00.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:00.134 "listen_address": { 00:22:00.134 "trtype": "TCP", 00:22:00.134 "adrfam": "IPv4", 00:22:00.134 "traddr": "10.0.0.2", 00:22:00.134 "trsvcid": "4420" 00:22:00.134 }, 00:22:00.134 "peer_address": { 00:22:00.134 "trtype": "TCP", 00:22:00.134 "adrfam": "IPv4", 00:22:00.134 "traddr": "10.0.0.1", 00:22:00.134 "trsvcid": "52134" 00:22:00.134 }, 00:22:00.134 "auth": { 00:22:00.134 "state": "completed", 00:22:00.134 "digest": "sha512", 00:22:00.134 "dhgroup": "ffdhe8192" 00:22:00.134 } 00:22:00.134 } 00:22:00.134 ]' 00:22:00.134 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:00.393 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.393 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.393 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:00.393 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:00.393 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.393 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.393 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.652 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:22:00.652 06:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:22:01.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:01.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:01.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:01.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:01.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:01.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:22:01.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:01.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:01.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:01.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:01.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:01.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:01.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.220 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.480 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.480 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.480 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.480 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.738 00:22:01.738 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:01.738 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:01.738 06:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.997 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.997 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.997 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.997 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.997 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.997 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:01.997 { 00:22:01.997 "cntlid": 145, 00:22:01.997 "qid": 0, 00:22:01.997 "state": "enabled", 00:22:01.997 "thread": "nvmf_tgt_poll_group_000", 00:22:01.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:01.997 "listen_address": { 
00:22:01.997 "trtype": "TCP", 00:22:01.997 "adrfam": "IPv4", 00:22:01.997 "traddr": "10.0.0.2", 00:22:01.997 "trsvcid": "4420" 00:22:01.997 }, 00:22:01.997 "peer_address": { 00:22:01.997 "trtype": "TCP", 00:22:01.997 "adrfam": "IPv4", 00:22:01.997 "traddr": "10.0.0.1", 00:22:01.997 "trsvcid": "52154" 00:22:01.997 }, 00:22:01.997 "auth": { 00:22:01.997 "state": "completed", 00:22:01.997 "digest": "sha512", 00:22:01.997 "dhgroup": "ffdhe8192" 00:22:01.997 } 00:22:01.997 } 00:22:01.997 ]' 00:22:01.997 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.997 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.997 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.256 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:02.256 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.256 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.256 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.256 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.256 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:22:02.256 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWY5NDhmNDExYzM3N2FmZDJjNjI0Y2U1MzQ0MGQ1NzdjNzY0ZDFlMTFlMDQ4NjgwvbiYKw==: --dhchap-ctrl-secret DHHC-1:03:Y2I4OGZjMDhjYTUyNzZiM2ViMDlkMTRkNjU3NGRlZjIwYzYxNTE4NjBiYTUyZThiYjRkNWMxZDFhNjBkMGJiNWHzWo0=: 00:22:02.823 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.823 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:02.823 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.823 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.082 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.082 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:22:03.082 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.082 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.082 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.082 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:03.082 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # local es=0 00:22:03.082 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:03.082 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:03.082 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:03.082 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:03.082 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:03.083 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:03.083 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:03.083 06:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:03.341 request: 00:22:03.341 { 00:22:03.341 "name": "nvme0", 00:22:03.341 "trtype": "tcp", 00:22:03.341 "traddr": "10.0.0.2", 00:22:03.341 "adrfam": "ipv4", 00:22:03.341 "trsvcid": "4420", 00:22:03.341 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:03.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:03.341 "prchk_reftag": false, 00:22:03.341 "prchk_guard": false, 00:22:03.341 "hdgst": false, 00:22:03.341 "ddgst": 
false, 00:22:03.341 "dhchap_key": "key2", 00:22:03.341 "allow_unrecognized_csi": false, 00:22:03.341 "method": "bdev_nvme_attach_controller", 00:22:03.341 "req_id": 1 00:22:03.341 } 00:22:03.341 Got JSON-RPC error response 00:22:03.341 response: 00:22:03.341 { 00:22:03.341 "code": -5, 00:22:03.341 "message": "Input/output error" 00:22:03.341 } 00:22:03.341 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:03.341 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:03.341 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:03.341 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:03.341 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:03.341 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.341 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.342 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.342 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.342 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.342 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.342 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:03.342 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:03.342 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:03.342 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:03.342 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:03.342 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:03.342 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:03.342 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:03.342 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:03.342 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:03.342 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:03.909 request: 00:22:03.909 { 00:22:03.909 "name": "nvme0", 00:22:03.909 "trtype": "tcp", 00:22:03.909 "traddr": "10.0.0.2", 
00:22:03.909 "adrfam": "ipv4", 00:22:03.909 "trsvcid": "4420", 00:22:03.909 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:03.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:03.909 "prchk_reftag": false, 00:22:03.909 "prchk_guard": false, 00:22:03.909 "hdgst": false, 00:22:03.909 "ddgst": false, 00:22:03.909 "dhchap_key": "key1", 00:22:03.909 "dhchap_ctrlr_key": "ckey2", 00:22:03.909 "allow_unrecognized_csi": false, 00:22:03.909 "method": "bdev_nvme_attach_controller", 00:22:03.909 "req_id": 1 00:22:03.909 } 00:22:03.909 Got JSON-RPC error response 00:22:03.909 response: 00:22:03.909 { 00:22:03.909 "code": -5, 00:22:03.909 "message": "Input/output error" 00:22:03.909 } 00:22:03.909 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:03.909 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:03.909 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:03.909 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:03.909 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:03.909 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.909 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.909 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.909 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 
00:22:03.909 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.909 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.909 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.909 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.909 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:03.909 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.909 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:03.909 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:03.909 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:03.909 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:03.909 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.909 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.909 06:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.477 request: 00:22:04.477 { 00:22:04.477 "name": "nvme0", 00:22:04.477 "trtype": "tcp", 00:22:04.477 "traddr": "10.0.0.2", 00:22:04.477 "adrfam": "ipv4", 00:22:04.478 "trsvcid": "4420", 00:22:04.478 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:04.478 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:04.478 "prchk_reftag": false, 00:22:04.478 "prchk_guard": false, 00:22:04.478 "hdgst": false, 00:22:04.478 "ddgst": false, 00:22:04.478 "dhchap_key": "key1", 00:22:04.478 "dhchap_ctrlr_key": "ckey1", 00:22:04.478 "allow_unrecognized_csi": false, 00:22:04.478 "method": "bdev_nvme_attach_controller", 00:22:04.478 "req_id": 1 00:22:04.478 } 00:22:04.478 Got JSON-RPC error response 00:22:04.478 response: 00:22:04.478 { 00:22:04.478 "code": -5, 00:22:04.478 "message": "Input/output error" 00:22:04.478 } 00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.478 
06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 982081 00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 982081 ']' 00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 982081 00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 982081 00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 982081' 00:22:04.478 killing process with pid 982081 00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 982081 00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 982081 00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1003756 00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1003756 00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1003756 ']' 00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:04.478 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.737 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:04.737 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:04.737 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:04.737 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:04.737 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.737 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.737 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:04.737 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1003756 00:22:04.737 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1003756 ']' 00:22:04.737 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.737 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:04.737 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:04.737 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:04.737 06:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.996 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:04.996 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:04.996 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:04.996 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.996 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.996 null0 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.qPB 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.dVK ]] 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dVK 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.255 06:12:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.BPS 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.3x2 ]] 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3x2 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.5PN 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.255 06:12:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.ZML ]] 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZML 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.lOx 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:05.255 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:05.821 nvme0n1 00:22:06.079 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:06.079 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:06.079 06:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:22:06.079 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.079 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.079 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.079 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.079 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.079 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:06.079 { 00:22:06.079 "cntlid": 1, 00:22:06.079 "qid": 0, 00:22:06.079 "state": "enabled", 00:22:06.079 "thread": "nvmf_tgt_poll_group_000", 00:22:06.079 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:06.079 "listen_address": { 00:22:06.079 "trtype": "TCP", 00:22:06.079 "adrfam": "IPv4", 00:22:06.079 "traddr": "10.0.0.2", 00:22:06.079 "trsvcid": "4420" 00:22:06.079 }, 00:22:06.079 "peer_address": { 00:22:06.079 "trtype": "TCP", 00:22:06.079 "adrfam": "IPv4", 00:22:06.079 "traddr": "10.0.0.1", 00:22:06.079 "trsvcid": "52202" 00:22:06.079 }, 00:22:06.079 "auth": { 00:22:06.079 "state": "completed", 00:22:06.079 "digest": "sha512", 00:22:06.079 "dhgroup": "ffdhe8192" 00:22:06.079 } 00:22:06.079 } 00:22:06.079 ]' 00:22:06.079 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:06.079 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.079 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:06.338 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 
00:22:06.338 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:06.338 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.338 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.338 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.597 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:22:06.597 06:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:22:07.164 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.164 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:07.164 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.164 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.164 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:07.164 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:22:07.164 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.164 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.164 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.164 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:07.164 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:07.422 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:07.422 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:07.423 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:07.423 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:07.423 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:07.423 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:07.423 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:07.423 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:22:07.423 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:07.423 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:07.423 request: 00:22:07.423 { 00:22:07.423 "name": "nvme0", 00:22:07.423 "trtype": "tcp", 00:22:07.423 "traddr": "10.0.0.2", 00:22:07.423 "adrfam": "ipv4", 00:22:07.423 "trsvcid": "4420", 00:22:07.423 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:07.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:07.423 "prchk_reftag": false, 00:22:07.423 "prchk_guard": false, 00:22:07.423 "hdgst": false, 00:22:07.423 "ddgst": false, 00:22:07.423 "dhchap_key": "key3", 00:22:07.423 "allow_unrecognized_csi": false, 00:22:07.423 "method": "bdev_nvme_attach_controller", 00:22:07.423 "req_id": 1 00:22:07.423 } 00:22:07.423 Got JSON-RPC error response 00:22:07.423 response: 00:22:07.423 { 00:22:07.423 "code": -5, 00:22:07.423 "message": "Input/output error" 00:22:07.423 } 00:22:07.423 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:07.423 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:07.423 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:07.423 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:07.423 06:12:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:07.423 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:07.423 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:07.423 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:07.682 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:07.682 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:07.682 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:07.682 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:07.682 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:07.682 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:07.682 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:07.682 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:07.682 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:22:07.682 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:07.943 request: 00:22:07.943 { 00:22:07.943 "name": "nvme0", 00:22:07.943 "trtype": "tcp", 00:22:07.943 "traddr": "10.0.0.2", 00:22:07.943 "adrfam": "ipv4", 00:22:07.943 "trsvcid": "4420", 00:22:07.943 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:07.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:07.943 "prchk_reftag": false, 00:22:07.943 "prchk_guard": false, 00:22:07.943 "hdgst": false, 00:22:07.943 "ddgst": false, 00:22:07.943 "dhchap_key": "key3", 00:22:07.943 "allow_unrecognized_csi": false, 00:22:07.943 "method": "bdev_nvme_attach_controller", 00:22:07.943 "req_id": 1 00:22:07.943 } 00:22:07.943 Got JSON-RPC error response 00:22:07.943 response: 00:22:07.943 { 00:22:07.943 "code": -5, 00:22:07.943 "message": "Input/output error" 00:22:07.943 } 00:22:07.943 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:07.943 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:07.943 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:07.943 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:07.943 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:07.943 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:07.943 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 
00:22:07.943 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:07.943 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:07.943 06:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:08.202 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:08.202 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.202 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.202 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.202 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:08.202 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.202 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.202 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.202 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key 
key1 00:22:08.202 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:08.202 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:08.202 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:08.202 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:08.202 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:08.202 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:08.202 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:08.202 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:08.202 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:08.461 request: 00:22:08.461 { 00:22:08.461 "name": "nvme0", 00:22:08.461 "trtype": "tcp", 00:22:08.461 "traddr": "10.0.0.2", 00:22:08.461 "adrfam": "ipv4", 00:22:08.461 "trsvcid": "4420", 00:22:08.461 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:08.461 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:08.461 "prchk_reftag": false, 00:22:08.461 "prchk_guard": false, 00:22:08.461 "hdgst": false, 00:22:08.461 "ddgst": false, 00:22:08.461 "dhchap_key": "key0", 00:22:08.461 "dhchap_ctrlr_key": "key1", 00:22:08.461 "allow_unrecognized_csi": false, 00:22:08.461 "method": "bdev_nvme_attach_controller", 00:22:08.461 "req_id": 1 00:22:08.461 } 00:22:08.461 Got JSON-RPC error response 00:22:08.461 response: 00:22:08.461 { 00:22:08.461 "code": -5, 00:22:08.461 "message": "Input/output error" 00:22:08.461 } 00:22:08.461 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:08.461 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:08.461 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:08.461 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:08.461 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:08.461 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:08.461 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:08.720 nvme0n1 00:22:08.720 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 
00:22:08.720 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:08.720 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.979 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.979 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.979 06:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.238 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:22:09.238 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.238 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.238 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.238 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:09.238 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:09.238 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:09.804 nvme0n1 00:22:09.804 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:09.804 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:09.804 06:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.064 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.064 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:10.064 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.064 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.064 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.064 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:10.064 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:10.064 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.322 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.322 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:22:10.322 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: --dhchap-ctrl-secret DHHC-1:03:YjJkOTBjZTgyMzE0NTMxNDBkZTk4YjY4MjQ0Njg2MmMwZGIzODJlZTg4Y2FmNWU4OWNhZGU1NTFjYzdlNGUyNGOabN4=: 00:22:10.892 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:10.892 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:10.892 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:10.892 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:10.892 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:10.892 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:10.892 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:10.892 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.892 06:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.150 06:12:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:22:11.150 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:11.150 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:11.150 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:11.150 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:11.150 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:11.150 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:11.150 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:11.150 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:11.150 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:11.409 request: 00:22:11.409 { 00:22:11.409 "name": "nvme0", 00:22:11.409 "trtype": "tcp", 00:22:11.409 "traddr": "10.0.0.2", 00:22:11.409 "adrfam": "ipv4", 00:22:11.409 "trsvcid": "4420", 00:22:11.409 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:11.409 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:11.409 "prchk_reftag": false, 00:22:11.409 "prchk_guard": false, 00:22:11.409 "hdgst": false, 00:22:11.409 "ddgst": false, 00:22:11.409 "dhchap_key": "key1", 00:22:11.409 "allow_unrecognized_csi": false, 00:22:11.409 "method": "bdev_nvme_attach_controller", 00:22:11.409 "req_id": 1 00:22:11.409 } 00:22:11.409 Got JSON-RPC error response 00:22:11.409 response: 00:22:11.409 { 00:22:11.409 "code": -5, 00:22:11.409 "message": "Input/output error" 00:22:11.409 } 00:22:11.409 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:11.409 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:11.409 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:11.409 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:11.409 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:11.409 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:11.409 06:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:12.486 nvme0n1 00:22:12.486 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc 
bdev_nvme_get_controllers 00:22:12.486 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:12.486 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.486 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.486 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.486 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.745 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:12.745 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.745 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.745 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.745 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:12.745 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:12.745 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:13.003 nvme0n1 00:22:13.003 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:13.003 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:13.003 06:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.262 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.262 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.262 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.262 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:13.262 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.262 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.262 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.262 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: '' 2s 00:22:13.262 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:13.262 06:12:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:13.262 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: 00:22:13.262 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:13.262 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:13.262 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:13.262 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: ]] 00:22:13.262 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:Nzg3YjExYWZhNjUyMzg4NmE5Y2IyMWIwZDFlMzEwYmX1Mlr7: 00:22:13.262 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:13.262 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:13.262 06:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:15.799 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:15.799 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:15.799 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:15.799 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:15.799 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:15.799 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:15.799 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1250 -- # return 0 00:22:15.799 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:15.799 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.799 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.799 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.799 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: 2s 00:22:15.799 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:15.799 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:15.799 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:15.799 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: 00:22:15.799 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:15.799 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:15.799 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:15.799 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: ]] 00:22:15.799 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo 
DHHC-1:02:ZTE0ZGEzNjhmZTQzNTllYjgzMGFlNDM4ZTE4YTZlNDgxM2JlNWE0MGU3MWNjMGZj0GJL6A==: 00:22:15.799 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:15.799 06:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:17.705 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:17.705 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:17.705 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:17.705 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:17.705 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:17.705 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:17.705 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:17.705 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.705 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:17.705 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.705 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.705 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.705 06:12:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:17.705 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:17.705 06:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:18.273 nvme0n1 00:22:18.273 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:18.273 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.273 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.273 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.273 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:18.273 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:18.840 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:18.840 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:18.840 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.840 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.840 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:18.841 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.841 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.841 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.841 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:18.841 06:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:19.099 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:19.099 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:19.099 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.358 06:12:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.358 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:19.358 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.358 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.358 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.358 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:19.358 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:19.358 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:19.358 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:19.358 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:19.358 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:19.358 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:19.358 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:19.358 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:19.926 request: 00:22:19.926 { 00:22:19.926 "name": "nvme0", 00:22:19.926 "dhchap_key": "key1", 00:22:19.926 "dhchap_ctrlr_key": "key3", 00:22:19.926 "method": "bdev_nvme_set_keys", 00:22:19.926 "req_id": 1 00:22:19.926 } 00:22:19.926 Got JSON-RPC error response 00:22:19.926 response: 00:22:19.926 { 00:22:19.926 "code": -13, 00:22:19.926 "message": "Permission denied" 00:22:19.926 } 00:22:19.926 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:19.926 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:19.926 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:19.926 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:19.926 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:19.926 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:19.926 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.926 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:22:19.926 06:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:20.863 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:20.863 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:20.863 06:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.122 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:21.122 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:21.122 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.122 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.122 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.122 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:21.122 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:21.122 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:22.059 nvme0n1 00:22:22.059 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:22.059 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.059 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.059 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.059 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:22.059 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:22.059 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:22.059 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:22.059 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:22.059 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:22.059 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:22.059 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:22.059 06:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:22.317 request: 00:22:22.317 { 00:22:22.317 "name": "nvme0", 00:22:22.317 "dhchap_key": "key2", 
00:22:22.317 "dhchap_ctrlr_key": "key0", 00:22:22.317 "method": "bdev_nvme_set_keys", 00:22:22.317 "req_id": 1 00:22:22.317 } 00:22:22.317 Got JSON-RPC error response 00:22:22.317 response: 00:22:22.317 { 00:22:22.317 "code": -13, 00:22:22.317 "message": "Permission denied" 00:22:22.317 } 00:22:22.317 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:22.317 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:22.317 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:22.317 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:22.317 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:22.317 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:22.317 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.576 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:22.576 06:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:23.512 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:23.512 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:23.512 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.771 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:23.771 06:12:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:23.771 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:23.771 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 982116 00:22:23.771 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 982116 ']' 00:22:23.771 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 982116 00:22:23.771 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:23.771 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:23.771 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 982116 00:22:23.771 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:23.771 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:23.771 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 982116' 00:22:23.771 killing process with pid 982116 00:22:23.771 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 982116 00:22:23.771 06:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 982116 00:22:24.029 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:24.029 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:24.029 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:24.029 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:22:24.029 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:24.029 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:24.029 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:24.029 rmmod nvme_tcp 00:22:24.288 rmmod nvme_fabrics 00:22:24.288 rmmod nvme_keyring 00:22:24.288 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:24.288 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:24.288 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:24.288 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1003756 ']' 00:22:24.288 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1003756 00:22:24.288 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1003756 ']' 00:22:24.288 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1003756 00:22:24.288 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:24.288 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:24.288 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1003756 00:22:24.288 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:24.288 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:24.288 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1003756' 00:22:24.288 killing process with pid 1003756 00:22:24.288 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1003756 00:22:24.288 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1003756 00:22:24.288 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:24.288 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:24.288 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:24.288 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:24.288 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:24.288 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:24.288 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:24.288 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:24.288 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:24.288 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.288 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.288 06:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.824 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:26.824 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.qPB /tmp/spdk.key-sha256.BPS /tmp/spdk.key-sha384.5PN 
/tmp/spdk.key-sha512.lOx /tmp/spdk.key-sha512.dVK /tmp/spdk.key-sha384.3x2 /tmp/spdk.key-sha256.ZML '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:26.824 00:22:26.824 real 2m31.525s 00:22:26.824 user 5m49.464s 00:22:26.824 sys 0m24.127s 00:22:26.824 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:26.824 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.824 ************************************ 00:22:26.824 END TEST nvmf_auth_target 00:22:26.824 ************************************ 00:22:26.824 06:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:26.824 06:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:26.824 06:12:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:26.824 06:12:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:26.824 06:12:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:26.824 ************************************ 00:22:26.824 START TEST nvmf_bdevio_no_huge 00:22:26.824 ************************************ 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:26.825 * Looking for test storage... 
00:22:26.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:26.825 06:12:46 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:26.825 06:12:46 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:26.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.825 --rc genhtml_branch_coverage=1 00:22:26.825 --rc genhtml_function_coverage=1 00:22:26.825 --rc genhtml_legend=1 00:22:26.825 --rc geninfo_all_blocks=1 00:22:26.825 --rc geninfo_unexecuted_blocks=1 00:22:26.825 00:22:26.825 ' 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:26.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.825 --rc genhtml_branch_coverage=1 00:22:26.825 --rc genhtml_function_coverage=1 00:22:26.825 --rc genhtml_legend=1 00:22:26.825 --rc geninfo_all_blocks=1 00:22:26.825 --rc geninfo_unexecuted_blocks=1 00:22:26.825 00:22:26.825 ' 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:26.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.825 --rc genhtml_branch_coverage=1 00:22:26.825 --rc genhtml_function_coverage=1 00:22:26.825 --rc genhtml_legend=1 00:22:26.825 --rc geninfo_all_blocks=1 00:22:26.825 --rc geninfo_unexecuted_blocks=1 00:22:26.825 00:22:26.825 ' 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:26.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.825 --rc genhtml_branch_coverage=1 00:22:26.825 --rc genhtml_function_coverage=1 00:22:26.825 --rc genhtml_legend=1 00:22:26.825 --rc geninfo_all_blocks=1 00:22:26.825 --rc geninfo_unexecuted_blocks=1 00:22:26.825 00:22:26.825 ' 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:26.825 
06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:26.825 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:26.825 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:26.826 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:26.826 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:26.826 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:22:26.826 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:26.826 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:26.826 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:26.826 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.826 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:26.826 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:26.826 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:26.826 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.826 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.826 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.826 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:26.826 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:26.826 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:26.826 06:12:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.395 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:33.395 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:33.395 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:22:33.395 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:33.395 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:33.395 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:33.395 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:33.395 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:33.395 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:33.395 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:33.395 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:33.395 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:33.395 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:33.395 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:33.395 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:33.395 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:33.395 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:33.395 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:33.395 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:33.395 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:33.395 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:33.395 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:33.395 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:33.395 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:33.395 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:33.395 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:33.395 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:33.395 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:33.395 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 
0x159b)' 00:22:33.396 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:33.396 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:33.396 Found net devices under 0000:af:00.0: cvl_0_0 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.396 
06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:33.396 Found net devices under 0000:af:00.1: cvl_0_1 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:22:33.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:33.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:22:33.396 00:22:33.396 --- 10.0.0.2 ping statistics --- 00:22:33.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.396 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:33.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:33.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:22:33.396 00:22:33.396 --- 10.0.0.1 ping statistics --- 00:22:33.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.396 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1010487 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1010487 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1010487 ']' 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:33.396 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.396 [2024-12-15 06:12:52.695695] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:22:33.396 [2024-12-15 06:12:52.695744] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:33.396 [2024-12-15 06:12:52.776166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:33.396 [2024-12-15 06:12:52.811347] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.396 [2024-12-15 06:12:52.811381] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.397 [2024-12-15 06:12:52.811388] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.397 [2024-12-15 06:12:52.811393] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.397 [2024-12-15 06:12:52.811398] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:33.397 [2024-12-15 06:12:52.812518] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:22:33.397 [2024-12-15 06:12:52.812610] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:22:33.397 [2024-12-15 06:12:52.812695] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:33.397 [2024-12-15 06:12:52.812696] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:22:33.397 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:33.397 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:33.397 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:33.397 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:33.397 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.397 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.397 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:33.397 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.397 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.397 [2024-12-15 06:12:52.965122] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.397 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.397 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:33.397 06:12:52 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.397 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.397 Malloc0 00:22:33.397 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.397 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:33.397 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.397 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.397 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.397 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:33.397 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.397 06:12:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.397 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.397 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:33.397 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.397 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.397 [2024-12-15 06:12:53.005420] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:33.397 06:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.397 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:33.397 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:33.397 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:33.397 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:33.397 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:33.397 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:33.397 { 00:22:33.397 "params": { 00:22:33.397 "name": "Nvme$subsystem", 00:22:33.397 "trtype": "$TEST_TRANSPORT", 00:22:33.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.397 "adrfam": "ipv4", 00:22:33.397 "trsvcid": "$NVMF_PORT", 00:22:33.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.397 "hdgst": ${hdgst:-false}, 00:22:33.397 "ddgst": ${ddgst:-false} 00:22:33.397 }, 00:22:33.397 "method": "bdev_nvme_attach_controller" 00:22:33.397 } 00:22:33.397 EOF 00:22:33.397 )") 00:22:33.397 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:33.397 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:22:33.397 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:33.397 06:12:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:33.397 "params": { 00:22:33.397 "name": "Nvme1", 00:22:33.397 "trtype": "tcp", 00:22:33.397 "traddr": "10.0.0.2", 00:22:33.397 "adrfam": "ipv4", 00:22:33.397 "trsvcid": "4420", 00:22:33.397 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.397 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:33.397 "hdgst": false, 00:22:33.397 "ddgst": false 00:22:33.397 }, 00:22:33.397 "method": "bdev_nvme_attach_controller" 00:22:33.397 }' 00:22:33.397 [2024-12-15 06:12:53.055864] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:22:33.397 [2024-12-15 06:12:53.055906] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1010510 ] 00:22:33.397 [2024-12-15 06:12:53.133517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:33.397 [2024-12-15 06:12:53.170972] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.397 [2024-12-15 06:12:53.171080] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.397 [2024-12-15 06:12:53.171080] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:33.397 I/O targets: 00:22:33.397 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:33.397 00:22:33.397 00:22:33.397 CUnit - A unit testing framework for C - Version 2.1-3 00:22:33.397 http://cunit.sourceforge.net/ 00:22:33.397 00:22:33.397 00:22:33.397 Suite: bdevio tests on: Nvme1n1 00:22:33.397 Test: blockdev write read block ...passed 00:22:33.656 Test: blockdev write zeroes read block ...passed 00:22:33.656 Test: blockdev write zeroes read no split ...passed 00:22:33.656 Test: blockdev write zeroes 
read split ...passed 00:22:33.656 Test: blockdev write zeroes read split partial ...passed 00:22:33.656 Test: blockdev reset ...[2024-12-15 06:12:53.613379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:33.656 [2024-12-15 06:12:53.613440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bad00 (9): Bad file descriptor 00:22:33.656 [2024-12-15 06:12:53.707866] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:22:33.656 passed 00:22:33.656 Test: blockdev write read 8 blocks ...passed 00:22:33.656 Test: blockdev write read size > 128k ...passed 00:22:33.656 Test: blockdev write read invalid size ...passed 00:22:33.915 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:33.915 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:33.915 Test: blockdev write read max offset ...passed 00:22:33.916 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:33.916 Test: blockdev writev readv 8 blocks ...passed 00:22:33.916 Test: blockdev writev readv 30 x 1block ...passed 00:22:33.916 Test: blockdev writev readv block ...passed 00:22:33.916 Test: blockdev writev readv size > 128k ...passed 00:22:33.916 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:33.916 Test: blockdev comparev and writev ...[2024-12-15 06:12:53.959778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:33.916 [2024-12-15 06:12:53.959811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.916 [2024-12-15 06:12:53.959824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:33.916 [2024-12-15 
06:12:53.959832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:33.916 [2024-12-15 06:12:53.960075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:33.916 [2024-12-15 06:12:53.960086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:33.916 [2024-12-15 06:12:53.960098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:33.916 [2024-12-15 06:12:53.960106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:33.916 [2024-12-15 06:12:53.960339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:33.916 [2024-12-15 06:12:53.960356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:33.916 [2024-12-15 06:12:53.960372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:33.916 [2024-12-15 06:12:53.960387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:33.916 [2024-12-15 06:12:53.960630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:33.916 [2024-12-15 06:12:53.960642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:33.916 [2024-12-15 06:12:53.960653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:22:33.916 [2024-12-15 06:12:53.960661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:33.916 passed 00:22:33.916 Test: blockdev nvme passthru rw ...passed 00:22:33.916 Test: blockdev nvme passthru vendor specific ...[2024-12-15 06:12:54.042293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:33.916 [2024-12-15 06:12:54.042308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:33.916 [2024-12-15 06:12:54.042420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:33.916 [2024-12-15 06:12:54.042431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:33.916 [2024-12-15 06:12:54.042529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:33.916 [2024-12-15 06:12:54.042540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:33.916 [2024-12-15 06:12:54.042642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:33.916 [2024-12-15 06:12:54.042651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:33.916 passed 00:22:34.174 Test: blockdev nvme admin passthru ...passed 00:22:34.174 Test: blockdev copy ...passed 00:22:34.174 00:22:34.174 Run Summary: Type Total Ran Passed Failed Inactive 00:22:34.174 suites 1 1 n/a 0 0 00:22:34.174 tests 23 23 23 0 0 00:22:34.174 asserts 152 152 152 0 n/a 00:22:34.174 00:22:34.174 Elapsed time = 1.306 seconds 
00:22:34.433 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:34.433 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.433 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:34.433 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.433 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:34.433 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:34.433 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:34.433 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:34.433 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:34.433 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:34.433 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:34.433 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:34.433 rmmod nvme_tcp 00:22:34.433 rmmod nvme_fabrics 00:22:34.433 rmmod nvme_keyring 00:22:34.433 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:34.433 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:22:34.433 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:34.433 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1010487 ']' 00:22:34.433 06:12:54 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1010487 00:22:34.433 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1010487 ']' 00:22:34.433 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1010487 00:22:34.433 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:34.433 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:34.433 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1010487 00:22:34.433 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:34.433 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:34.433 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1010487' 00:22:34.433 killing process with pid 1010487 00:22:34.433 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1010487 00:22:34.433 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1010487 00:22:34.692 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:34.692 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:34.692 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:34.692 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:34.692 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:34.692 06:12:54 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:34.692 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:34.692 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:34.692 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:34.692 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.692 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.692 06:12:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.228 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:37.228 00:22:37.228 real 0m10.279s 00:22:37.228 user 0m11.705s 00:22:37.228 sys 0m5.309s 00:22:37.228 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:37.228 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:37.228 ************************************ 00:22:37.228 END TEST nvmf_bdevio_no_huge 00:22:37.228 ************************************ 00:22:37.228 06:12:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:37.228 06:12:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:37.228 06:12:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:37.228 06:12:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:37.228 
************************************ 00:22:37.228 START TEST nvmf_tls 00:22:37.228 ************************************ 00:22:37.228 06:12:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:37.228 * Looking for test storage... 00:22:37.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:37.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.228 --rc genhtml_branch_coverage=1 00:22:37.228 --rc genhtml_function_coverage=1 00:22:37.228 --rc genhtml_legend=1 00:22:37.228 --rc geninfo_all_blocks=1 00:22:37.228 --rc geninfo_unexecuted_blocks=1 00:22:37.228 00:22:37.228 ' 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:37.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.228 --rc genhtml_branch_coverage=1 00:22:37.228 --rc genhtml_function_coverage=1 00:22:37.228 --rc genhtml_legend=1 00:22:37.228 --rc geninfo_all_blocks=1 00:22:37.228 --rc geninfo_unexecuted_blocks=1 00:22:37.228 00:22:37.228 ' 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:37.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.228 --rc genhtml_branch_coverage=1 00:22:37.228 --rc genhtml_function_coverage=1 00:22:37.228 --rc genhtml_legend=1 00:22:37.228 --rc geninfo_all_blocks=1 00:22:37.228 --rc geninfo_unexecuted_blocks=1 00:22:37.228 00:22:37.228 ' 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:37.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.228 --rc genhtml_branch_coverage=1 00:22:37.228 --rc genhtml_function_coverage=1 00:22:37.228 --rc genhtml_legend=1 00:22:37.228 --rc geninfo_all_blocks=1 00:22:37.228 --rc geninfo_unexecuted_blocks=1 00:22:37.228 00:22:37.228 ' 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.228 
06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.228 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:37.229 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:22:37.229 06:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:43.795 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.796 06:13:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:43.796 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:43.796 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.796 06:13:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:43.796 Found net devices under 0000:af:00.0: cvl_0_0 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:43.796 Found net devices under 0000:af:00.1: cvl_0_1 00:22:43.796 06:13:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:43.796 
06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:43.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:43.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:22:43.796 00:22:43.796 --- 10.0.0.2 ping statistics --- 00:22:43.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.796 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:43.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:43.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:22:43.796 00:22:43.796 --- 10.0.0.1 ping statistics --- 00:22:43.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.796 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:43.796 06:13:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:43.796 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:43.796 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:43.796 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:43.796 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.796 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1014211 00:22:43.796 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:22:43.796 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1014211 00:22:43.796 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1014211 ']' 00:22:43.796 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.796 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:43.796 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.796 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:43.796 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.796 [2024-12-15 06:13:03.072631] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:22:43.796 [2024-12-15 06:13:03.072674] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.796 [2024-12-15 06:13:03.150149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.796 [2024-12-15 06:13:03.171123] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.796 [2024-12-15 06:13:03.171160] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:43.796 [2024-12-15 06:13:03.171167] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.796 [2024-12-15 06:13:03.171173] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.796 [2024-12-15 06:13:03.171178] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:43.796 [2024-12-15 06:13:03.171657] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.796 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:43.796 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:43.796 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:43.796 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:43.796 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.796 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:43.796 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:43.797 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:43.797 true 00:22:43.797 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:43.797 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:43.797 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:43.797 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:43.797 
06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:43.797 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:43.797 06:13:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:44.056 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:44.056 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:44.056 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:44.315 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:44.315 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:44.315 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:44.315 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:44.315 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:44.315 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:44.573 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:44.573 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:44.573 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:22:44.832 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:44.832 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:45.091 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:45.091 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:45.091 06:13:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:45.091 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:45.091 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:45.349 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:45.349 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:45.349 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:45.349 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:45.349 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:45.349 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:45.349 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:45.349 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:45.349 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:45.349 06:13:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:45.349 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:45.349 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:45.349 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:45.349 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:45.349 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:45.349 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:45.349 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:45.349 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:45.349 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:45.349 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.whJkQn2VJC 00:22:45.349 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:45.349 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.6s2Xw81u8E 00:22:45.349 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:45.349 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:45.349 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.whJkQn2VJC 00:22:45.349 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.6s2Xw81u8E 00:22:45.349 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:45.607 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:45.866 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.whJkQn2VJC 00:22:45.866 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.whJkQn2VJC 00:22:45.866 06:13:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:46.125 [2024-12-15 06:13:06.101437] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:46.125 06:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:46.384 06:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:46.384 [2024-12-15 06:13:06.458337] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:46.384 [2024-12-15 06:13:06.458585] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:46.384 06:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:46.643 malloc0 00:22:46.643 06:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:46.901 06:13:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.whJkQn2VJC 00:22:46.901 06:13:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:47.160 06:13:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.whJkQn2VJC 00:22:59.367 Initializing NVMe Controllers 00:22:59.367 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:59.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:59.367 Initialization complete. Launching workers. 
00:22:59.367 ======================================================== 00:22:59.367 Latency(us) 00:22:59.367 Device Information : IOPS MiB/s Average min max 00:22:59.367 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16865.95 65.88 3794.72 779.59 5753.16 00:22:59.367 ======================================================== 00:22:59.367 Total : 16865.95 65.88 3794.72 779.59 5753.16 00:22:59.367 00:22:59.367 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.whJkQn2VJC 00:22:59.367 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:59.367 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:59.367 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:59.367 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.whJkQn2VJC 00:22:59.367 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:59.367 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1016495 00:22:59.367 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:59.367 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1016495 /var/tmp/bdevperf.sock 00:22:59.367 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:59.367 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1016495 ']' 00:22:59.367 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:22:59.367 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:59.367 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:59.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:59.367 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:59.367 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.367 [2024-12-15 06:13:17.347086] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:22:59.367 [2024-12-15 06:13:17.347139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1016495 ] 00:22:59.367 [2024-12-15 06:13:17.422010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.367 [2024-12-15 06:13:17.445108] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.367 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:59.367 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:59.367 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.whJkQn2VJC 00:22:59.367 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:22:59.367 [2024-12-15 06:13:17.885301] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:59.367 TLSTESTn1 00:22:59.367 06:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:59.368 Running I/O for 10 seconds... 00:22:59.935 5366.00 IOPS, 20.96 MiB/s [2024-12-15T05:13:21.451Z] 5487.50 IOPS, 21.44 MiB/s [2024-12-15T05:13:22.387Z] 5532.00 IOPS, 21.61 MiB/s [2024-12-15T05:13:23.322Z] 5558.50 IOPS, 21.71 MiB/s [2024-12-15T05:13:24.258Z] 5551.20 IOPS, 21.68 MiB/s [2024-12-15T05:13:25.193Z] 5562.33 IOPS, 21.73 MiB/s [2024-12-15T05:13:26.126Z] 5557.57 IOPS, 21.71 MiB/s [2024-12-15T05:13:27.502Z] 5544.88 IOPS, 21.66 MiB/s [2024-12-15T05:13:28.437Z] 5564.33 IOPS, 21.74 MiB/s [2024-12-15T05:13:28.437Z] 5571.60 IOPS, 21.76 MiB/s 00:23:08.297 Latency(us) 00:23:08.297 [2024-12-15T05:13:28.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.297 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:08.297 Verification LBA range: start 0x0 length 0x2000 00:23:08.297 TLSTESTn1 : 10.02 5575.04 21.78 0.00 0.00 22922.87 4743.56 28711.01 00:23:08.297 [2024-12-15T05:13:28.437Z] =================================================================================================================== 00:23:08.297 [2024-12-15T05:13:28.437Z] Total : 5575.04 21.78 0.00 0.00 22922.87 4743.56 28711.01 00:23:08.297 { 00:23:08.297 "results": [ 00:23:08.297 { 00:23:08.297 "job": "TLSTESTn1", 00:23:08.297 "core_mask": "0x4", 00:23:08.297 "workload": "verify", 00:23:08.297 "status": "finished", 00:23:08.297 "verify_range": { 00:23:08.297 "start": 0, 00:23:08.297 "length": 8192 00:23:08.297 }, 00:23:08.297 "queue_depth": 128, 00:23:08.297 "io_size": 4096, 00:23:08.297 "runtime": 10.016435, 00:23:08.297 "iops": 
5575.037425990385, 00:23:08.297 "mibps": 21.77748994527494, 00:23:08.297 "io_failed": 0, 00:23:08.297 "io_timeout": 0, 00:23:08.297 "avg_latency_us": 22922.871637903543, 00:23:08.297 "min_latency_us": 4743.558095238095, 00:23:08.297 "max_latency_us": 28711.009523809524 00:23:08.297 } 00:23:08.297 ], 00:23:08.297 "core_count": 1 00:23:08.297 } 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1016495 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1016495 ']' 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1016495 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1016495 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1016495' 00:23:08.297 killing process with pid 1016495 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1016495 00:23:08.297 Received shutdown signal, test time was about 10.000000 seconds 00:23:08.297 00:23:08.297 Latency(us) 00:23:08.297 [2024-12-15T05:13:28.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.297 [2024-12-15T05:13:28.437Z] 
=================================================================================================================== 00:23:08.297 [2024-12-15T05:13:28.437Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1016495 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6s2Xw81u8E 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6s2Xw81u8E 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6s2Xw81u8E 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.6s2Xw81u8E 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1018280 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1018280 /var/tmp/bdevperf.sock 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1018280 ']' 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:08.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:08.297 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.297 [2024-12-15 06:13:28.381718] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:08.297 [2024-12-15 06:13:28.381762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1018280 ] 00:23:08.556 [2024-12-15 06:13:28.454761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.556 [2024-12-15 06:13:28.477332] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.556 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:08.556 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:08.556 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6s2Xw81u8E 00:23:08.815 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:08.815 [2024-12-15 06:13:28.928437] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:08.815 [2024-12-15 06:13:28.939290] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:08.815 [2024-12-15 06:13:28.939627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13440c0 (107): Transport endpoint is not connected 00:23:08.815 [2024-12-15 06:13:28.940619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13440c0 (9): Bad file descriptor 00:23:08.815 
[2024-12-15 06:13:28.941620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:08.815 [2024-12-15 06:13:28.941632] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:08.815 [2024-12-15 06:13:28.941643] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:08.815 [2024-12-15 06:13:28.941655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:23:08.815 request: 00:23:08.815 { 00:23:08.815 "name": "TLSTEST", 00:23:08.815 "trtype": "tcp", 00:23:08.815 "traddr": "10.0.0.2", 00:23:08.815 "adrfam": "ipv4", 00:23:08.815 "trsvcid": "4420", 00:23:08.815 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.815 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:08.815 "prchk_reftag": false, 00:23:08.815 "prchk_guard": false, 00:23:08.815 "hdgst": false, 00:23:08.815 "ddgst": false, 00:23:08.815 "psk": "key0", 00:23:08.815 "allow_unrecognized_csi": false, 00:23:08.815 "method": "bdev_nvme_attach_controller", 00:23:08.815 "req_id": 1 00:23:08.815 } 00:23:08.815 Got JSON-RPC error response 00:23:08.815 response: 00:23:08.815 { 00:23:08.815 "code": -5, 00:23:08.815 "message": "Input/output error" 00:23:08.815 } 00:23:09.075 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1018280 00:23:09.075 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1018280 ']' 00:23:09.075 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1018280 00:23:09.075 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:09.075 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:09.075 06:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1018280 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1018280' 00:23:09.075 killing process with pid 1018280 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1018280 00:23:09.075 Received shutdown signal, test time was about 10.000000 seconds 00:23:09.075 00:23:09.075 Latency(us) 00:23:09.075 [2024-12-15T05:13:29.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.075 [2024-12-15T05:13:29.215Z] =================================================================================================================== 00:23:09.075 [2024-12-15T05:13:29.215Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1018280 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.whJkQn2VJC 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.whJkQn2VJC 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.whJkQn2VJC 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.whJkQn2VJC 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1018498 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1018498 
/var/tmp/bdevperf.sock 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1018498 ']' 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.075 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.075 [2024-12-15 06:13:29.206224] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:09.075 [2024-12-15 06:13:29.206271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1018498 ] 00:23:09.334 [2024-12-15 06:13:29.282973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.334 [2024-12-15 06:13:29.305956] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.334 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:09.334 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:09.334 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.whJkQn2VJC 00:23:09.705 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:09.705 [2024-12-15 06:13:29.753923] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:09.705 [2024-12-15 06:13:29.761281] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:09.705 [2024-12-15 06:13:29.761304] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:09.705 [2024-12-15 06:13:29.761326] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:09.705 [2024-12-15 06:13:29.762224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c0c0 (107): Transport endpoint is not connected 00:23:09.705 [2024-12-15 06:13:29.763217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x254c0c0 (9): Bad file descriptor 00:23:09.705 [2024-12-15 06:13:29.764218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:09.705 [2024-12-15 06:13:29.764230] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:09.705 [2024-12-15 06:13:29.764240] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:09.705 [2024-12-15 06:13:29.764255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:23:09.705 request: 00:23:09.705 { 00:23:09.705 "name": "TLSTEST", 00:23:09.705 "trtype": "tcp", 00:23:09.705 "traddr": "10.0.0.2", 00:23:09.705 "adrfam": "ipv4", 00:23:09.705 "trsvcid": "4420", 00:23:09.705 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.705 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:09.705 "prchk_reftag": false, 00:23:09.705 "prchk_guard": false, 00:23:09.705 "hdgst": false, 00:23:09.705 "ddgst": false, 00:23:09.705 "psk": "key0", 00:23:09.705 "allow_unrecognized_csi": false, 00:23:09.705 "method": "bdev_nvme_attach_controller", 00:23:09.705 "req_id": 1 00:23:09.705 } 00:23:09.705 Got JSON-RPC error response 00:23:09.705 response: 00:23:09.705 { 00:23:09.705 "code": -5, 00:23:09.705 "message": "Input/output error" 00:23:09.705 } 00:23:09.705 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1018498 00:23:09.705 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1018498 ']' 00:23:09.705 06:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1018498 00:23:09.705 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:09.705 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:09.706 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1018498 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1018498' 00:23:09.965 killing process with pid 1018498 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1018498 00:23:09.965 Received shutdown signal, test time was about 10.000000 seconds 00:23:09.965 00:23:09.965 Latency(us) 00:23:09.965 [2024-12-15T05:13:30.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.965 [2024-12-15T05:13:30.105Z] =================================================================================================================== 00:23:09.965 [2024-12-15T05:13:30.105Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1018498 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:09.965 06:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.whJkQn2VJC 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.whJkQn2VJC 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.whJkQn2VJC 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.whJkQn2VJC 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1018523 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1018523 /var/tmp/bdevperf.sock 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1018523 ']' 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.965 06:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.965 [2024-12-15 06:13:30.040697] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:09.965 [2024-12-15 06:13:30.040747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1018523 ] 00:23:10.224 [2024-12-15 06:13:30.118257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.224 [2024-12-15 06:13:30.141057] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.224 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:10.224 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:10.224 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.whJkQn2VJC 00:23:10.483 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:10.483 [2024-12-15 06:13:30.588247] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:10.483 [2024-12-15 06:13:30.593412] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:10.483 [2024-12-15 06:13:30.593435] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:10.483 [2024-12-15 06:13:30.593460] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:10.483 [2024-12-15 06:13:30.593513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c7d0c0 (107): Transport endpoint is not connected 00:23:10.483 [2024-12-15 06:13:30.594501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c7d0c0 (9): Bad file descriptor 00:23:10.483 [2024-12-15 06:13:30.595501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:10.483 [2024-12-15 06:13:30.595516] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:10.483 [2024-12-15 06:13:30.595527] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:10.483 [2024-12-15 06:13:30.595539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:23:10.483 request: 00:23:10.483 { 00:23:10.483 "name": "TLSTEST", 00:23:10.483 "trtype": "tcp", 00:23:10.483 "traddr": "10.0.0.2", 00:23:10.483 "adrfam": "ipv4", 00:23:10.483 "trsvcid": "4420", 00:23:10.483 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:10.483 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:10.483 "prchk_reftag": false, 00:23:10.483 "prchk_guard": false, 00:23:10.483 "hdgst": false, 00:23:10.483 "ddgst": false, 00:23:10.483 "psk": "key0", 00:23:10.483 "allow_unrecognized_csi": false, 00:23:10.483 "method": "bdev_nvme_attach_controller", 00:23:10.483 "req_id": 1 00:23:10.483 } 00:23:10.483 Got JSON-RPC error response 00:23:10.483 response: 00:23:10.483 { 00:23:10.483 "code": -5, 00:23:10.483 "message": "Input/output error" 00:23:10.483 } 00:23:10.483 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1018523 00:23:10.483 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1018523 ']' 00:23:10.483 06:13:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1018523 00:23:10.483 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:10.483 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:10.483 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1018523 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1018523' 00:23:10.743 killing process with pid 1018523 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1018523 00:23:10.743 Received shutdown signal, test time was about 10.000000 seconds 00:23:10.743 00:23:10.743 Latency(us) 00:23:10.743 [2024-12-15T05:13:30.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.743 [2024-12-15T05:13:30.883Z] =================================================================================================================== 00:23:10.743 [2024-12-15T05:13:30.883Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1018523 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:10.743 06:13:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1018747 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:10.743 06:13:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1018747 /var/tmp/bdevperf.sock 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1018747 ']' 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:10.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:10.743 06:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.743 [2024-12-15 06:13:30.857334] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:10.743 [2024-12-15 06:13:30.857379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1018747 ] 00:23:11.002 [2024-12-15 06:13:30.931415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.002 [2024-12-15 06:13:30.953273] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:11.002 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:11.002 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:11.002 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:11.260 [2024-12-15 06:13:31.211834] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:11.260 [2024-12-15 06:13:31.211859] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:11.260 request: 00:23:11.260 { 00:23:11.260 "name": "key0", 00:23:11.260 "path": "", 00:23:11.260 "method": "keyring_file_add_key", 00:23:11.260 "req_id": 1 00:23:11.260 } 00:23:11.260 Got JSON-RPC error response 00:23:11.260 response: 00:23:11.260 { 00:23:11.260 "code": -1, 00:23:11.260 "message": "Operation not permitted" 00:23:11.260 } 00:23:11.260 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:11.519 [2024-12-15 06:13:31.408424] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:23:11.519 [2024-12-15 06:13:31.408460] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:11.519 request: 00:23:11.519 { 00:23:11.519 "name": "TLSTEST", 00:23:11.519 "trtype": "tcp", 00:23:11.519 "traddr": "10.0.0.2", 00:23:11.519 "adrfam": "ipv4", 00:23:11.519 "trsvcid": "4420", 00:23:11.519 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.519 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:11.519 "prchk_reftag": false, 00:23:11.519 "prchk_guard": false, 00:23:11.519 "hdgst": false, 00:23:11.519 "ddgst": false, 00:23:11.519 "psk": "key0", 00:23:11.519 "allow_unrecognized_csi": false, 00:23:11.519 "method": "bdev_nvme_attach_controller", 00:23:11.519 "req_id": 1 00:23:11.519 } 00:23:11.519 Got JSON-RPC error response 00:23:11.519 response: 00:23:11.519 { 00:23:11.519 "code": -126, 00:23:11.519 "message": "Required key not available" 00:23:11.519 } 00:23:11.519 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1018747 00:23:11.519 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1018747 ']' 00:23:11.519 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1018747 00:23:11.519 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:11.519 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:11.519 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1018747 00:23:11.519 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:11.519 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:11.519 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1018747' 00:23:11.519 killing process with pid 1018747 
00:23:11.519 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1018747 00:23:11.519 Received shutdown signal, test time was about 10.000000 seconds 00:23:11.519 00:23:11.519 Latency(us) 00:23:11.519 [2024-12-15T05:13:31.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.519 [2024-12-15T05:13:31.659Z] =================================================================================================================== 00:23:11.519 [2024-12-15T05:13:31.659Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:11.519 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1018747 00:23:11.519 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:11.519 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:11.519 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:11.519 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:11.519 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:11.519 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1014211 00:23:11.519 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1014211 ']' 00:23:11.519 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1014211 00:23:11.519 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:11.519 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:11.519 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1014211 00:23:11.839 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:23:11.839 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:11.839 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1014211' 00:23:11.840 killing process with pid 1014211 00:23:11.840 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1014211 00:23:11.840 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1014211 00:23:11.840 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:11.840 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:11.840 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:11.840 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:11.840 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:11.840 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:11.840 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:11.840 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:11.840 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:11.840 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.z0VrvUvZQZ 00:23:11.840 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:11.840 06:13:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.z0VrvUvZQZ 00:23:11.840 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:11.840 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:11.840 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:11.840 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.840 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1018952 00:23:11.840 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1018952 00:23:11.840 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:11.840 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1018952 ']' 00:23:11.840 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.840 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:11.840 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.840 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:11.840 06:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.133 [2024-12-15 06:13:31.963687] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:12.133 [2024-12-15 06:13:31.963736] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.133 [2024-12-15 06:13:32.044552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.133 [2024-12-15 06:13:32.065297] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:12.133 [2024-12-15 06:13:32.065333] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.133 [2024-12-15 06:13:32.065340] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.133 [2024-12-15 06:13:32.065346] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:12.133 [2024-12-15 06:13:32.065351] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:12.133 [2024-12-15 06:13:32.065830] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.133 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:12.133 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:12.133 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:12.133 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:12.133 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.133 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.133 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.z0VrvUvZQZ 00:23:12.133 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.z0VrvUvZQZ 00:23:12.133 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:12.392 [2024-12-15 06:13:32.368069] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.392 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:12.651 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:12.651 [2024-12-15 06:13:32.724983] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:12.651 [2024-12-15 06:13:32.725185] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:12.651 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:12.910 malloc0 00:23:12.910 06:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:13.169 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.z0VrvUvZQZ 00:23:13.169 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:13.427 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.z0VrvUvZQZ 00:23:13.427 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:13.427 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:13.427 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:13.427 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.z0VrvUvZQZ 00:23:13.427 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:13.427 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1019241 00:23:13.427 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:13.427 06:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:13.427 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1019241 /var/tmp/bdevperf.sock 00:23:13.427 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1019241 ']' 00:23:13.427 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:13.427 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:13.427 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:13.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:13.427 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:13.427 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.427 [2024-12-15 06:13:33.534913] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:13.427 [2024-12-15 06:13:33.534963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1019241 ] 00:23:13.686 [2024-12-15 06:13:33.612065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.686 [2024-12-15 06:13:33.633949] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.686 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:13.686 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:13.686 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z0VrvUvZQZ 00:23:13.945 06:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:14.204 [2024-12-15 06:13:34.092701] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:14.204 TLSTESTn1 00:23:14.204 06:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:14.204 Running I/O for 10 seconds... 
00:23:16.515 5294.00 IOPS, 20.68 MiB/s [2024-12-15T05:13:37.591Z] 5495.00 IOPS, 21.46 MiB/s [2024-12-15T05:13:38.526Z] 5518.67 IOPS, 21.56 MiB/s [2024-12-15T05:13:39.462Z] 5534.00 IOPS, 21.62 MiB/s [2024-12-15T05:13:40.400Z] 5550.20 IOPS, 21.68 MiB/s [2024-12-15T05:13:41.334Z] 5520.50 IOPS, 21.56 MiB/s [2024-12-15T05:13:42.711Z] 5425.57 IOPS, 21.19 MiB/s [2024-12-15T05:13:43.279Z] 5301.38 IOPS, 20.71 MiB/s [2024-12-15T05:13:44.658Z] 5209.44 IOPS, 20.35 MiB/s [2024-12-15T05:13:44.658Z] 5136.00 IOPS, 20.06 MiB/s 00:23:24.518 Latency(us) 00:23:24.518 [2024-12-15T05:13:44.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.518 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:24.518 Verification LBA range: start 0x0 length 0x2000 00:23:24.518 TLSTESTn1 : 10.02 5137.50 20.07 0.00 0.00 24873.25 4743.56 34952.53 00:23:24.518 [2024-12-15T05:13:44.658Z] =================================================================================================================== 00:23:24.518 [2024-12-15T05:13:44.658Z] Total : 5137.50 20.07 0.00 0.00 24873.25 4743.56 34952.53 00:23:24.518 { 00:23:24.518 "results": [ 00:23:24.518 { 00:23:24.518 "job": "TLSTESTn1", 00:23:24.518 "core_mask": "0x4", 00:23:24.518 "workload": "verify", 00:23:24.518 "status": "finished", 00:23:24.518 "verify_range": { 00:23:24.518 "start": 0, 00:23:24.518 "length": 8192 00:23:24.518 }, 00:23:24.518 "queue_depth": 128, 00:23:24.518 "io_size": 4096, 00:23:24.518 "runtime": 10.02181, 00:23:24.518 "iops": 5137.495123136439, 00:23:24.518 "mibps": 20.068340324751716, 00:23:24.518 "io_failed": 0, 00:23:24.518 "io_timeout": 0, 00:23:24.518 "avg_latency_us": 24873.251556167204, 00:23:24.518 "min_latency_us": 4743.558095238095, 00:23:24.518 "max_latency_us": 34952.53333333333 00:23:24.518 } 00:23:24.518 ], 00:23:24.518 "core_count": 1 00:23:24.518 } 00:23:24.518 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:23:24.518 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1019241 00:23:24.518 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1019241 ']' 00:23:24.518 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1019241 00:23:24.518 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:24.518 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:24.518 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1019241 00:23:24.518 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:24.518 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:24.518 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1019241' 00:23:24.518 killing process with pid 1019241 00:23:24.518 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1019241 00:23:24.518 Received shutdown signal, test time was about 10.000000 seconds 00:23:24.518 00:23:24.518 Latency(us) 00:23:24.518 [2024-12-15T05:13:44.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.518 [2024-12-15T05:13:44.658Z] =================================================================================================================== 00:23:24.518 [2024-12-15T05:13:44.658Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:24.518 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1019241 00:23:24.518 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.z0VrvUvZQZ 00:23:24.518 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.z0VrvUvZQZ 00:23:24.518 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:24.518 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.z0VrvUvZQZ 00:23:24.518 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:24.518 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:24.518 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:24.518 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:24.518 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.z0VrvUvZQZ 00:23:24.518 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:24.518 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:24.518 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:24.518 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.z0VrvUvZQZ 00:23:24.518 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:24.518 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1020936 00:23:24.518 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:24.518 
06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:24.519 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1020936 /var/tmp/bdevperf.sock 00:23:24.519 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1020936 ']' 00:23:24.519 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:24.519 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:24.519 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:24.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:24.519 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:24.519 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.519 [2024-12-15 06:13:44.579558] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:24.519 [2024-12-15 06:13:44.579605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1020936 ] 00:23:24.519 [2024-12-15 06:13:44.654754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.778 [2024-12-15 06:13:44.677796] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.778 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:24.778 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:24.778 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z0VrvUvZQZ 00:23:25.037 [2024-12-15 06:13:44.929279] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.z0VrvUvZQZ': 0100666 00:23:25.037 [2024-12-15 06:13:44.929305] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:25.037 request: 00:23:25.037 { 00:23:25.037 "name": "key0", 00:23:25.037 "path": "/tmp/tmp.z0VrvUvZQZ", 00:23:25.037 "method": "keyring_file_add_key", 00:23:25.037 "req_id": 1 00:23:25.037 } 00:23:25.037 Got JSON-RPC error response 00:23:25.037 response: 00:23:25.037 { 00:23:25.037 "code": -1, 00:23:25.037 "message": "Operation not permitted" 00:23:25.037 } 00:23:25.037 06:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:25.037 [2024-12-15 06:13:45.121868] bdev_nvme_rpc.c: 
515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:25.037 [2024-12-15 06:13:45.121897] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:25.037 request: 00:23:25.037 { 00:23:25.037 "name": "TLSTEST", 00:23:25.037 "trtype": "tcp", 00:23:25.037 "traddr": "10.0.0.2", 00:23:25.037 "adrfam": "ipv4", 00:23:25.037 "trsvcid": "4420", 00:23:25.037 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.037 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:25.037 "prchk_reftag": false, 00:23:25.037 "prchk_guard": false, 00:23:25.037 "hdgst": false, 00:23:25.037 "ddgst": false, 00:23:25.037 "psk": "key0", 00:23:25.037 "allow_unrecognized_csi": false, 00:23:25.037 "method": "bdev_nvme_attach_controller", 00:23:25.037 "req_id": 1 00:23:25.037 } 00:23:25.037 Got JSON-RPC error response 00:23:25.037 response: 00:23:25.037 { 00:23:25.037 "code": -126, 00:23:25.037 "message": "Required key not available" 00:23:25.037 } 00:23:25.037 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1020936 00:23:25.037 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1020936 ']' 00:23:25.037 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1020936 00:23:25.037 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:25.037 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:25.037 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1020936 00:23:25.296 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:25.296 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:25.296 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1020936' 00:23:25.296 killing process with pid 1020936 00:23:25.296 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1020936 00:23:25.296 Received shutdown signal, test time was about 10.000000 seconds 00:23:25.296 00:23:25.296 Latency(us) 00:23:25.296 [2024-12-15T05:13:45.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.296 [2024-12-15T05:13:45.436Z] =================================================================================================================== 00:23:25.296 [2024-12-15T05:13:45.436Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:25.296 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1020936 00:23:25.296 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:25.296 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:25.296 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:25.296 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:25.296 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:25.296 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1018952 00:23:25.296 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1018952 ']' 00:23:25.296 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1018952 00:23:25.296 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:25.296 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:25.296 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1018952 00:23:25.296 
06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:25.296 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:25.296 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1018952' 00:23:25.296 killing process with pid 1018952 00:23:25.296 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1018952 00:23:25.296 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1018952 00:23:25.554 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:25.554 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:25.554 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:25.554 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.554 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1021058 00:23:25.554 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:25.554 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1021058 00:23:25.554 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1021058 ']' 00:23:25.554 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.554 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:25.554 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:23:25.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.554 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.554 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.554 [2024-12-15 06:13:45.613975] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:25.554 [2024-12-15 06:13:45.614025] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.555 [2024-12-15 06:13:45.683872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.813 [2024-12-15 06:13:45.704602] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.813 [2024-12-15 06:13:45.704639] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.813 [2024-12-15 06:13:45.704647] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.813 [2024-12-15 06:13:45.704653] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.813 [2024-12-15 06:13:45.704659] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:25.813 [2024-12-15 06:13:45.705149] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.813 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:25.813 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:25.813 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:25.813 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:25.813 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.813 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.813 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.z0VrvUvZQZ 00:23:25.813 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:25.813 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.z0VrvUvZQZ 00:23:25.813 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:25.813 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:25.813 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:25.813 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:25.813 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.z0VrvUvZQZ 00:23:25.813 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.z0VrvUvZQZ 00:23:25.813 06:13:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:26.072 [2024-12-15 06:13:46.003573] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.072 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:26.072 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:26.330 [2024-12-15 06:13:46.380539] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:26.330 [2024-12-15 06:13:46.380738] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.330 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:26.589 malloc0 00:23:26.589 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:26.847 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.z0VrvUvZQZ 00:23:26.847 [2024-12-15 06:13:46.937894] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.z0VrvUvZQZ': 0100666 00:23:26.847 [2024-12-15 06:13:46.937923] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:26.847 request: 00:23:26.847 { 00:23:26.847 "name": "key0", 00:23:26.847 "path": "/tmp/tmp.z0VrvUvZQZ", 00:23:26.847 "method": "keyring_file_add_key", 00:23:26.847 "req_id": 1 
00:23:26.847 } 00:23:26.847 Got JSON-RPC error response 00:23:26.847 response: 00:23:26.847 { 00:23:26.847 "code": -1, 00:23:26.847 "message": "Operation not permitted" 00:23:26.847 } 00:23:26.847 06:13:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:27.106 [2024-12-15 06:13:47.110353] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:27.106 [2024-12-15 06:13:47.110388] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:27.106 request: 00:23:27.106 { 00:23:27.106 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.107 "host": "nqn.2016-06.io.spdk:host1", 00:23:27.107 "psk": "key0", 00:23:27.107 "method": "nvmf_subsystem_add_host", 00:23:27.107 "req_id": 1 00:23:27.107 } 00:23:27.107 Got JSON-RPC error response 00:23:27.107 response: 00:23:27.107 { 00:23:27.107 "code": -32603, 00:23:27.107 "message": "Internal error" 00:23:27.107 } 00:23:27.107 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:27.107 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:27.107 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:27.107 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:27.107 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1021058 00:23:27.107 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1021058 ']' 00:23:27.107 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1021058 00:23:27.107 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:27.107 06:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.107 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1021058 00:23:27.107 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:27.107 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:27.107 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1021058' 00:23:27.107 killing process with pid 1021058 00:23:27.107 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1021058 00:23:27.107 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1021058 00:23:27.365 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.z0VrvUvZQZ 00:23:27.365 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:27.366 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:27.366 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:27.366 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.366 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1021443 00:23:27.366 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:27.366 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1021443 00:23:27.366 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1021443 ']' 00:23:27.366 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.366 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.366 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.366 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.366 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.366 [2024-12-15 06:13:47.395774] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:27.366 [2024-12-15 06:13:47.395819] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.366 [2024-12-15 06:13:47.466275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.366 [2024-12-15 06:13:47.486760] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.366 [2024-12-15 06:13:47.486802] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:27.366 [2024-12-15 06:13:47.486809] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.366 [2024-12-15 06:13:47.486815] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.366 [2024-12-15 06:13:47.486820] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:27.366 [2024-12-15 06:13:47.487321] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.625 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.625 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:27.625 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:27.625 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:27.625 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.625 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.625 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.z0VrvUvZQZ 00:23:27.625 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.z0VrvUvZQZ 00:23:27.625 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:27.883 [2024-12-15 06:13:47.785585] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.883 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:27.883 06:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:28.142 [2024-12-15 06:13:48.146525] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:28.142 [2024-12-15 06:13:48.146720] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:28.142 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:28.401 malloc0 00:23:28.401 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:28.401 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.z0VrvUvZQZ 00:23:28.659 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:28.918 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1021764 00:23:28.918 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:28.918 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:28.918 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1021764 /var/tmp/bdevperf.sock 00:23:28.918 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1021764 ']' 00:23:28.918 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:28.918 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:28.918 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:23:28.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:28.918 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:28.918 06:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.918 [2024-12-15 06:13:48.913835] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:28.918 [2024-12-15 06:13:48.913884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1021764 ] 00:23:28.918 [2024-12-15 06:13:48.989731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.918 [2024-12-15 06:13:49.012387] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.177 06:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:29.177 06:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:29.177 06:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z0VrvUvZQZ 00:23:29.177 06:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:29.436 [2024-12-15 06:13:49.464693] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:29.436 TLSTESTn1 00:23:29.436 06:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:30.005 06:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:30.005 "subsystems": [ 00:23:30.005 { 00:23:30.005 "subsystem": "keyring", 00:23:30.005 "config": [ 00:23:30.005 { 00:23:30.005 "method": "keyring_file_add_key", 00:23:30.005 "params": { 00:23:30.005 "name": "key0", 00:23:30.005 "path": "/tmp/tmp.z0VrvUvZQZ" 00:23:30.005 } 00:23:30.005 } 00:23:30.005 ] 00:23:30.005 }, 00:23:30.005 { 00:23:30.005 "subsystem": "iobuf", 00:23:30.005 "config": [ 00:23:30.005 { 00:23:30.005 "method": "iobuf_set_options", 00:23:30.005 "params": { 00:23:30.005 "small_pool_count": 8192, 00:23:30.005 "large_pool_count": 1024, 00:23:30.005 "small_bufsize": 8192, 00:23:30.005 "large_bufsize": 135168, 00:23:30.005 "enable_numa": false 00:23:30.005 } 00:23:30.005 } 00:23:30.005 ] 00:23:30.005 }, 00:23:30.005 { 00:23:30.005 "subsystem": "sock", 00:23:30.005 "config": [ 00:23:30.005 { 00:23:30.005 "method": "sock_set_default_impl", 00:23:30.005 "params": { 00:23:30.005 "impl_name": "posix" 00:23:30.005 } 00:23:30.005 }, 00:23:30.005 { 00:23:30.005 "method": "sock_impl_set_options", 00:23:30.005 "params": { 00:23:30.005 "impl_name": "ssl", 00:23:30.005 "recv_buf_size": 4096, 00:23:30.005 "send_buf_size": 4096, 00:23:30.005 "enable_recv_pipe": true, 00:23:30.005 "enable_quickack": false, 00:23:30.005 "enable_placement_id": 0, 00:23:30.005 "enable_zerocopy_send_server": true, 00:23:30.005 "enable_zerocopy_send_client": false, 00:23:30.005 "zerocopy_threshold": 0, 00:23:30.005 "tls_version": 0, 00:23:30.005 "enable_ktls": false 00:23:30.005 } 00:23:30.005 }, 00:23:30.005 { 00:23:30.005 "method": "sock_impl_set_options", 00:23:30.005 "params": { 00:23:30.005 "impl_name": "posix", 00:23:30.005 "recv_buf_size": 2097152, 00:23:30.005 "send_buf_size": 2097152, 00:23:30.005 "enable_recv_pipe": true, 00:23:30.005 "enable_quickack": false, 00:23:30.005 "enable_placement_id": 0, 
00:23:30.005 "enable_zerocopy_send_server": true, 00:23:30.005 "enable_zerocopy_send_client": false, 00:23:30.005 "zerocopy_threshold": 0, 00:23:30.005 "tls_version": 0, 00:23:30.005 "enable_ktls": false 00:23:30.005 } 00:23:30.005 } 00:23:30.005 ] 00:23:30.005 }, 00:23:30.005 { 00:23:30.005 "subsystem": "vmd", 00:23:30.005 "config": [] 00:23:30.005 }, 00:23:30.005 { 00:23:30.005 "subsystem": "accel", 00:23:30.005 "config": [ 00:23:30.005 { 00:23:30.005 "method": "accel_set_options", 00:23:30.005 "params": { 00:23:30.005 "small_cache_size": 128, 00:23:30.005 "large_cache_size": 16, 00:23:30.005 "task_count": 2048, 00:23:30.005 "sequence_count": 2048, 00:23:30.005 "buf_count": 2048 00:23:30.005 } 00:23:30.005 } 00:23:30.005 ] 00:23:30.005 }, 00:23:30.005 { 00:23:30.005 "subsystem": "bdev", 00:23:30.005 "config": [ 00:23:30.005 { 00:23:30.005 "method": "bdev_set_options", 00:23:30.005 "params": { 00:23:30.005 "bdev_io_pool_size": 65535, 00:23:30.005 "bdev_io_cache_size": 256, 00:23:30.005 "bdev_auto_examine": true, 00:23:30.006 "iobuf_small_cache_size": 128, 00:23:30.006 "iobuf_large_cache_size": 16 00:23:30.006 } 00:23:30.006 }, 00:23:30.006 { 00:23:30.006 "method": "bdev_raid_set_options", 00:23:30.006 "params": { 00:23:30.006 "process_window_size_kb": 1024, 00:23:30.006 "process_max_bandwidth_mb_sec": 0 00:23:30.006 } 00:23:30.006 }, 00:23:30.006 { 00:23:30.006 "method": "bdev_iscsi_set_options", 00:23:30.006 "params": { 00:23:30.006 "timeout_sec": 30 00:23:30.006 } 00:23:30.006 }, 00:23:30.006 { 00:23:30.006 "method": "bdev_nvme_set_options", 00:23:30.006 "params": { 00:23:30.006 "action_on_timeout": "none", 00:23:30.006 "timeout_us": 0, 00:23:30.006 "timeout_admin_us": 0, 00:23:30.006 "keep_alive_timeout_ms": 10000, 00:23:30.006 "arbitration_burst": 0, 00:23:30.006 "low_priority_weight": 0, 00:23:30.006 "medium_priority_weight": 0, 00:23:30.006 "high_priority_weight": 0, 00:23:30.006 "nvme_adminq_poll_period_us": 10000, 00:23:30.006 "nvme_ioq_poll_period_us": 0, 
00:23:30.006 "io_queue_requests": 0, 00:23:30.006 "delay_cmd_submit": true, 00:23:30.006 "transport_retry_count": 4, 00:23:30.006 "bdev_retry_count": 3, 00:23:30.006 "transport_ack_timeout": 0, 00:23:30.006 "ctrlr_loss_timeout_sec": 0, 00:23:30.006 "reconnect_delay_sec": 0, 00:23:30.006 "fast_io_fail_timeout_sec": 0, 00:23:30.006 "disable_auto_failback": false, 00:23:30.006 "generate_uuids": false, 00:23:30.006 "transport_tos": 0, 00:23:30.006 "nvme_error_stat": false, 00:23:30.006 "rdma_srq_size": 0, 00:23:30.006 "io_path_stat": false, 00:23:30.006 "allow_accel_sequence": false, 00:23:30.006 "rdma_max_cq_size": 0, 00:23:30.006 "rdma_cm_event_timeout_ms": 0, 00:23:30.006 "dhchap_digests": [ 00:23:30.006 "sha256", 00:23:30.006 "sha384", 00:23:30.006 "sha512" 00:23:30.006 ], 00:23:30.006 "dhchap_dhgroups": [ 00:23:30.006 "null", 00:23:30.006 "ffdhe2048", 00:23:30.006 "ffdhe3072", 00:23:30.006 "ffdhe4096", 00:23:30.006 "ffdhe6144", 00:23:30.006 "ffdhe8192" 00:23:30.006 ], 00:23:30.006 "rdma_umr_per_io": false 00:23:30.006 } 00:23:30.006 }, 00:23:30.006 { 00:23:30.006 "method": "bdev_nvme_set_hotplug", 00:23:30.006 "params": { 00:23:30.006 "period_us": 100000, 00:23:30.006 "enable": false 00:23:30.006 } 00:23:30.006 }, 00:23:30.006 { 00:23:30.006 "method": "bdev_malloc_create", 00:23:30.006 "params": { 00:23:30.006 "name": "malloc0", 00:23:30.006 "num_blocks": 8192, 00:23:30.006 "block_size": 4096, 00:23:30.006 "physical_block_size": 4096, 00:23:30.006 "uuid": "ca68154b-024d-420c-b57b-cc6cc253fe99", 00:23:30.006 "optimal_io_boundary": 0, 00:23:30.006 "md_size": 0, 00:23:30.006 "dif_type": 0, 00:23:30.006 "dif_is_head_of_md": false, 00:23:30.006 "dif_pi_format": 0 00:23:30.006 } 00:23:30.006 }, 00:23:30.006 { 00:23:30.006 "method": "bdev_wait_for_examine" 00:23:30.006 } 00:23:30.006 ] 00:23:30.006 }, 00:23:30.006 { 00:23:30.006 "subsystem": "nbd", 00:23:30.006 "config": [] 00:23:30.006 }, 00:23:30.006 { 00:23:30.006 "subsystem": "scheduler", 00:23:30.006 "config": [ 
00:23:30.006 { 00:23:30.006 "method": "framework_set_scheduler", 00:23:30.006 "params": { 00:23:30.006 "name": "static" 00:23:30.006 } 00:23:30.006 } 00:23:30.006 ] 00:23:30.006 }, 00:23:30.006 { 00:23:30.006 "subsystem": "nvmf", 00:23:30.006 "config": [ 00:23:30.006 { 00:23:30.006 "method": "nvmf_set_config", 00:23:30.006 "params": { 00:23:30.006 "discovery_filter": "match_any", 00:23:30.006 "admin_cmd_passthru": { 00:23:30.006 "identify_ctrlr": false 00:23:30.006 }, 00:23:30.006 "dhchap_digests": [ 00:23:30.006 "sha256", 00:23:30.006 "sha384", 00:23:30.006 "sha512" 00:23:30.006 ], 00:23:30.006 "dhchap_dhgroups": [ 00:23:30.006 "null", 00:23:30.006 "ffdhe2048", 00:23:30.006 "ffdhe3072", 00:23:30.006 "ffdhe4096", 00:23:30.006 "ffdhe6144", 00:23:30.006 "ffdhe8192" 00:23:30.006 ] 00:23:30.006 } 00:23:30.006 }, 00:23:30.006 { 00:23:30.006 "method": "nvmf_set_max_subsystems", 00:23:30.006 "params": { 00:23:30.006 "max_subsystems": 1024 00:23:30.006 } 00:23:30.006 }, 00:23:30.006 { 00:23:30.006 "method": "nvmf_set_crdt", 00:23:30.006 "params": { 00:23:30.006 "crdt1": 0, 00:23:30.006 "crdt2": 0, 00:23:30.006 "crdt3": 0 00:23:30.006 } 00:23:30.006 }, 00:23:30.006 { 00:23:30.006 "method": "nvmf_create_transport", 00:23:30.006 "params": { 00:23:30.006 "trtype": "TCP", 00:23:30.006 "max_queue_depth": 128, 00:23:30.006 "max_io_qpairs_per_ctrlr": 127, 00:23:30.006 "in_capsule_data_size": 4096, 00:23:30.006 "max_io_size": 131072, 00:23:30.006 "io_unit_size": 131072, 00:23:30.006 "max_aq_depth": 128, 00:23:30.006 "num_shared_buffers": 511, 00:23:30.006 "buf_cache_size": 4294967295, 00:23:30.006 "dif_insert_or_strip": false, 00:23:30.006 "zcopy": false, 00:23:30.006 "c2h_success": false, 00:23:30.006 "sock_priority": 0, 00:23:30.006 "abort_timeout_sec": 1, 00:23:30.006 "ack_timeout": 0, 00:23:30.006 "data_wr_pool_size": 0 00:23:30.006 } 00:23:30.006 }, 00:23:30.006 { 00:23:30.006 "method": "nvmf_create_subsystem", 00:23:30.006 "params": { 00:23:30.006 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:30.006 "allow_any_host": false, 00:23:30.006 "serial_number": "SPDK00000000000001", 00:23:30.006 "model_number": "SPDK bdev Controller", 00:23:30.006 "max_namespaces": 10, 00:23:30.006 "min_cntlid": 1, 00:23:30.006 "max_cntlid": 65519, 00:23:30.006 "ana_reporting": false 00:23:30.006 } 00:23:30.006 }, 00:23:30.006 { 00:23:30.006 "method": "nvmf_subsystem_add_host", 00:23:30.006 "params": { 00:23:30.006 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.006 "host": "nqn.2016-06.io.spdk:host1", 00:23:30.006 "psk": "key0" 00:23:30.006 } 00:23:30.006 }, 00:23:30.006 { 00:23:30.006 "method": "nvmf_subsystem_add_ns", 00:23:30.006 "params": { 00:23:30.006 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.006 "namespace": { 00:23:30.006 "nsid": 1, 00:23:30.006 "bdev_name": "malloc0", 00:23:30.006 "nguid": "CA68154B024D420CB57BCC6CC253FE99", 00:23:30.006 "uuid": "ca68154b-024d-420c-b57b-cc6cc253fe99", 00:23:30.006 "no_auto_visible": false 00:23:30.006 } 00:23:30.006 } 00:23:30.006 }, 00:23:30.006 { 00:23:30.006 "method": "nvmf_subsystem_add_listener", 00:23:30.006 "params": { 00:23:30.006 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.006 "listen_address": { 00:23:30.006 "trtype": "TCP", 00:23:30.006 "adrfam": "IPv4", 00:23:30.006 "traddr": "10.0.0.2", 00:23:30.006 "trsvcid": "4420" 00:23:30.006 }, 00:23:30.006 "secure_channel": true 00:23:30.006 } 00:23:30.006 } 00:23:30.006 ] 00:23:30.006 } 00:23:30.006 ] 00:23:30.006 }' 00:23:30.006 06:13:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:30.006 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:30.006 "subsystems": [ 00:23:30.006 { 00:23:30.006 "subsystem": "keyring", 00:23:30.006 "config": [ 00:23:30.006 { 00:23:30.006 "method": "keyring_file_add_key", 00:23:30.006 "params": { 00:23:30.006 "name": "key0", 00:23:30.006 "path": 
"/tmp/tmp.z0VrvUvZQZ" 00:23:30.006 } 00:23:30.006 } 00:23:30.006 ] 00:23:30.006 }, 00:23:30.006 { 00:23:30.006 "subsystem": "iobuf", 00:23:30.006 "config": [ 00:23:30.006 { 00:23:30.006 "method": "iobuf_set_options", 00:23:30.007 "params": { 00:23:30.007 "small_pool_count": 8192, 00:23:30.007 "large_pool_count": 1024, 00:23:30.007 "small_bufsize": 8192, 00:23:30.007 "large_bufsize": 135168, 00:23:30.007 "enable_numa": false 00:23:30.007 } 00:23:30.007 } 00:23:30.007 ] 00:23:30.007 }, 00:23:30.007 { 00:23:30.007 "subsystem": "sock", 00:23:30.007 "config": [ 00:23:30.007 { 00:23:30.007 "method": "sock_set_default_impl", 00:23:30.007 "params": { 00:23:30.007 "impl_name": "posix" 00:23:30.007 } 00:23:30.007 }, 00:23:30.007 { 00:23:30.007 "method": "sock_impl_set_options", 00:23:30.007 "params": { 00:23:30.007 "impl_name": "ssl", 00:23:30.007 "recv_buf_size": 4096, 00:23:30.007 "send_buf_size": 4096, 00:23:30.007 "enable_recv_pipe": true, 00:23:30.007 "enable_quickack": false, 00:23:30.007 "enable_placement_id": 0, 00:23:30.007 "enable_zerocopy_send_server": true, 00:23:30.007 "enable_zerocopy_send_client": false, 00:23:30.007 "zerocopy_threshold": 0, 00:23:30.007 "tls_version": 0, 00:23:30.007 "enable_ktls": false 00:23:30.007 } 00:23:30.007 }, 00:23:30.007 { 00:23:30.007 "method": "sock_impl_set_options", 00:23:30.007 "params": { 00:23:30.007 "impl_name": "posix", 00:23:30.007 "recv_buf_size": 2097152, 00:23:30.007 "send_buf_size": 2097152, 00:23:30.007 "enable_recv_pipe": true, 00:23:30.007 "enable_quickack": false, 00:23:30.007 "enable_placement_id": 0, 00:23:30.007 "enable_zerocopy_send_server": true, 00:23:30.007 "enable_zerocopy_send_client": false, 00:23:30.007 "zerocopy_threshold": 0, 00:23:30.007 "tls_version": 0, 00:23:30.007 "enable_ktls": false 00:23:30.007 } 00:23:30.007 } 00:23:30.007 ] 00:23:30.007 }, 00:23:30.007 { 00:23:30.007 "subsystem": "vmd", 00:23:30.007 "config": [] 00:23:30.007 }, 00:23:30.007 { 00:23:30.007 "subsystem": "accel", 00:23:30.007 
"config": [ 00:23:30.007 { 00:23:30.007 "method": "accel_set_options", 00:23:30.007 "params": { 00:23:30.007 "small_cache_size": 128, 00:23:30.007 "large_cache_size": 16, 00:23:30.007 "task_count": 2048, 00:23:30.007 "sequence_count": 2048, 00:23:30.007 "buf_count": 2048 00:23:30.007 } 00:23:30.007 } 00:23:30.007 ] 00:23:30.007 }, 00:23:30.007 { 00:23:30.007 "subsystem": "bdev", 00:23:30.007 "config": [ 00:23:30.007 { 00:23:30.007 "method": "bdev_set_options", 00:23:30.007 "params": { 00:23:30.007 "bdev_io_pool_size": 65535, 00:23:30.007 "bdev_io_cache_size": 256, 00:23:30.007 "bdev_auto_examine": true, 00:23:30.007 "iobuf_small_cache_size": 128, 00:23:30.007 "iobuf_large_cache_size": 16 00:23:30.007 } 00:23:30.007 }, 00:23:30.007 { 00:23:30.007 "method": "bdev_raid_set_options", 00:23:30.007 "params": { 00:23:30.007 "process_window_size_kb": 1024, 00:23:30.007 "process_max_bandwidth_mb_sec": 0 00:23:30.007 } 00:23:30.007 }, 00:23:30.007 { 00:23:30.007 "method": "bdev_iscsi_set_options", 00:23:30.007 "params": { 00:23:30.007 "timeout_sec": 30 00:23:30.007 } 00:23:30.007 }, 00:23:30.007 { 00:23:30.007 "method": "bdev_nvme_set_options", 00:23:30.007 "params": { 00:23:30.007 "action_on_timeout": "none", 00:23:30.007 "timeout_us": 0, 00:23:30.007 "timeout_admin_us": 0, 00:23:30.007 "keep_alive_timeout_ms": 10000, 00:23:30.007 "arbitration_burst": 0, 00:23:30.007 "low_priority_weight": 0, 00:23:30.007 "medium_priority_weight": 0, 00:23:30.007 "high_priority_weight": 0, 00:23:30.007 "nvme_adminq_poll_period_us": 10000, 00:23:30.007 "nvme_ioq_poll_period_us": 0, 00:23:30.007 "io_queue_requests": 512, 00:23:30.007 "delay_cmd_submit": true, 00:23:30.007 "transport_retry_count": 4, 00:23:30.007 "bdev_retry_count": 3, 00:23:30.007 "transport_ack_timeout": 0, 00:23:30.007 "ctrlr_loss_timeout_sec": 0, 00:23:30.007 "reconnect_delay_sec": 0, 00:23:30.007 "fast_io_fail_timeout_sec": 0, 00:23:30.007 "disable_auto_failback": false, 00:23:30.007 "generate_uuids": false, 00:23:30.007 
"transport_tos": 0, 00:23:30.007 "nvme_error_stat": false, 00:23:30.007 "rdma_srq_size": 0, 00:23:30.007 "io_path_stat": false, 00:23:30.007 "allow_accel_sequence": false, 00:23:30.007 "rdma_max_cq_size": 0, 00:23:30.007 "rdma_cm_event_timeout_ms": 0, 00:23:30.007 "dhchap_digests": [ 00:23:30.007 "sha256", 00:23:30.007 "sha384", 00:23:30.007 "sha512" 00:23:30.007 ], 00:23:30.007 "dhchap_dhgroups": [ 00:23:30.007 "null", 00:23:30.007 "ffdhe2048", 00:23:30.007 "ffdhe3072", 00:23:30.007 "ffdhe4096", 00:23:30.007 "ffdhe6144", 00:23:30.007 "ffdhe8192" 00:23:30.007 ], 00:23:30.007 "rdma_umr_per_io": false 00:23:30.007 } 00:23:30.007 }, 00:23:30.007 { 00:23:30.007 "method": "bdev_nvme_attach_controller", 00:23:30.007 "params": { 00:23:30.007 "name": "TLSTEST", 00:23:30.007 "trtype": "TCP", 00:23:30.007 "adrfam": "IPv4", 00:23:30.007 "traddr": "10.0.0.2", 00:23:30.007 "trsvcid": "4420", 00:23:30.007 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.007 "prchk_reftag": false, 00:23:30.007 "prchk_guard": false, 00:23:30.007 "ctrlr_loss_timeout_sec": 0, 00:23:30.007 "reconnect_delay_sec": 0, 00:23:30.007 "fast_io_fail_timeout_sec": 0, 00:23:30.007 "psk": "key0", 00:23:30.007 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:30.007 "hdgst": false, 00:23:30.007 "ddgst": false, 00:23:30.007 "multipath": "multipath" 00:23:30.007 } 00:23:30.007 }, 00:23:30.007 { 00:23:30.007 "method": "bdev_nvme_set_hotplug", 00:23:30.007 "params": { 00:23:30.007 "period_us": 100000, 00:23:30.007 "enable": false 00:23:30.007 } 00:23:30.007 }, 00:23:30.007 { 00:23:30.007 "method": "bdev_wait_for_examine" 00:23:30.007 } 00:23:30.007 ] 00:23:30.007 }, 00:23:30.007 { 00:23:30.007 "subsystem": "nbd", 00:23:30.007 "config": [] 00:23:30.007 } 00:23:30.007 ] 00:23:30.007 }' 00:23:30.007 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1021764 00:23:30.007 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1021764 ']' 00:23:30.007 06:13:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1021764 00:23:30.007 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:30.007 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:30.007 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1021764 00:23:30.267 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:30.267 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:30.267 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1021764' 00:23:30.267 killing process with pid 1021764 00:23:30.267 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1021764 00:23:30.267 Received shutdown signal, test time was about 10.000000 seconds 00:23:30.267 00:23:30.267 Latency(us) 00:23:30.267 [2024-12-15T05:13:50.407Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.267 [2024-12-15T05:13:50.407Z] =================================================================================================================== 00:23:30.267 [2024-12-15T05:13:50.407Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:30.267 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1021764 00:23:30.267 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1021443 00:23:30.267 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1021443 ']' 00:23:30.267 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1021443 00:23:30.267 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 
00:23:30.267 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:30.267 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1021443 00:23:30.267 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:30.267 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:30.267 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1021443' 00:23:30.267 killing process with pid 1021443 00:23:30.267 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1021443 00:23:30.267 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1021443 00:23:30.526 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:30.526 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:30.526 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:30.526 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:30.526 "subsystems": [ 00:23:30.526 { 00:23:30.526 "subsystem": "keyring", 00:23:30.526 "config": [ 00:23:30.526 { 00:23:30.526 "method": "keyring_file_add_key", 00:23:30.526 "params": { 00:23:30.526 "name": "key0", 00:23:30.526 "path": "/tmp/tmp.z0VrvUvZQZ" 00:23:30.526 } 00:23:30.526 } 00:23:30.526 ] 00:23:30.526 }, 00:23:30.526 { 00:23:30.526 "subsystem": "iobuf", 00:23:30.526 "config": [ 00:23:30.526 { 00:23:30.526 "method": "iobuf_set_options", 00:23:30.526 "params": { 00:23:30.526 "small_pool_count": 8192, 00:23:30.526 "large_pool_count": 1024, 00:23:30.526 "small_bufsize": 8192, 00:23:30.526 "large_bufsize": 135168, 00:23:30.526 "enable_numa": false 
00:23:30.526 } 00:23:30.526 } 00:23:30.526 ] 00:23:30.526 }, 00:23:30.526 { 00:23:30.526 "subsystem": "sock", 00:23:30.526 "config": [ 00:23:30.526 { 00:23:30.526 "method": "sock_set_default_impl", 00:23:30.526 "params": { 00:23:30.526 "impl_name": "posix" 00:23:30.526 } 00:23:30.526 }, 00:23:30.526 { 00:23:30.526 "method": "sock_impl_set_options", 00:23:30.526 "params": { 00:23:30.526 "impl_name": "ssl", 00:23:30.526 "recv_buf_size": 4096, 00:23:30.526 "send_buf_size": 4096, 00:23:30.526 "enable_recv_pipe": true, 00:23:30.526 "enable_quickack": false, 00:23:30.526 "enable_placement_id": 0, 00:23:30.526 "enable_zerocopy_send_server": true, 00:23:30.526 "enable_zerocopy_send_client": false, 00:23:30.526 "zerocopy_threshold": 0, 00:23:30.526 "tls_version": 0, 00:23:30.526 "enable_ktls": false 00:23:30.526 } 00:23:30.526 }, 00:23:30.526 { 00:23:30.526 "method": "sock_impl_set_options", 00:23:30.526 "params": { 00:23:30.526 "impl_name": "posix", 00:23:30.526 "recv_buf_size": 2097152, 00:23:30.526 "send_buf_size": 2097152, 00:23:30.526 "enable_recv_pipe": true, 00:23:30.526 "enable_quickack": false, 00:23:30.526 "enable_placement_id": 0, 00:23:30.526 "enable_zerocopy_send_server": true, 00:23:30.526 "enable_zerocopy_send_client": false, 00:23:30.526 "zerocopy_threshold": 0, 00:23:30.526 "tls_version": 0, 00:23:30.526 "enable_ktls": false 00:23:30.526 } 00:23:30.526 } 00:23:30.526 ] 00:23:30.526 }, 00:23:30.526 { 00:23:30.526 "subsystem": "vmd", 00:23:30.526 "config": [] 00:23:30.526 }, 00:23:30.526 { 00:23:30.526 "subsystem": "accel", 00:23:30.526 "config": [ 00:23:30.526 { 00:23:30.526 "method": "accel_set_options", 00:23:30.526 "params": { 00:23:30.526 "small_cache_size": 128, 00:23:30.526 "large_cache_size": 16, 00:23:30.526 "task_count": 2048, 00:23:30.526 "sequence_count": 2048, 00:23:30.526 "buf_count": 2048 00:23:30.526 } 00:23:30.526 } 00:23:30.526 ] 00:23:30.526 }, 00:23:30.526 { 00:23:30.526 "subsystem": "bdev", 00:23:30.526 "config": [ 00:23:30.526 { 
00:23:30.526 "method": "bdev_set_options", 00:23:30.526 "params": { 00:23:30.526 "bdev_io_pool_size": 65535, 00:23:30.526 "bdev_io_cache_size": 256, 00:23:30.526 "bdev_auto_examine": true, 00:23:30.526 "iobuf_small_cache_size": 128, 00:23:30.526 "iobuf_large_cache_size": 16 00:23:30.526 } 00:23:30.526 }, 00:23:30.526 { 00:23:30.526 "method": "bdev_raid_set_options", 00:23:30.526 "params": { 00:23:30.526 "process_window_size_kb": 1024, 00:23:30.526 "process_max_bandwidth_mb_sec": 0 00:23:30.526 } 00:23:30.526 }, 00:23:30.526 { 00:23:30.526 "method": "bdev_iscsi_set_options", 00:23:30.526 "params": { 00:23:30.526 "timeout_sec": 30 00:23:30.526 } 00:23:30.526 }, 00:23:30.526 { 00:23:30.526 "method": "bdev_nvme_set_options", 00:23:30.526 "params": { 00:23:30.526 "action_on_timeout": "none", 00:23:30.526 "timeout_us": 0, 00:23:30.526 "timeout_admin_us": 0, 00:23:30.526 "keep_alive_timeout_ms": 10000, 00:23:30.526 "arbitration_burst": 0, 00:23:30.526 "low_priority_weight": 0, 00:23:30.526 "medium_priority_weight": 0, 00:23:30.526 "high_priority_weight": 0, 00:23:30.526 "nvme_adminq_poll_period_us": 10000, 00:23:30.526 "nvme_ioq_poll_period_us": 0, 00:23:30.526 "io_queue_requests": 0, 00:23:30.526 "delay_cmd_submit": true, 00:23:30.526 "transport_retry_count": 4, 00:23:30.526 "bdev_retry_count": 3, 00:23:30.526 "transport_ack_timeout": 0, 00:23:30.526 "ctrlr_loss_timeout_sec": 0, 00:23:30.526 "reconnect_delay_sec": 0, 00:23:30.526 "fast_io_fail_timeout_sec": 0, 00:23:30.526 "disable_auto_failback": false, 00:23:30.526 "generate_uuids": false, 00:23:30.526 "transport_tos": 0, 00:23:30.526 "nvme_error_stat": false, 00:23:30.526 "rdma_srq_size": 0, 00:23:30.526 "io_path_stat": false, 00:23:30.526 "allow_accel_sequence": false, 00:23:30.526 "rdma_max_cq_size": 0, 00:23:30.526 "rdma_cm_event_timeout_ms": 0, 00:23:30.526 "dhchap_digests": [ 00:23:30.526 "sha256", 00:23:30.526 "sha384", 00:23:30.526 "sha512" 00:23:30.526 ], 00:23:30.526 "dhchap_dhgroups": [ 00:23:30.526 "null", 
00:23:30.526 "ffdhe2048", 00:23:30.526 "ffdhe3072", 00:23:30.526 "ffdhe4096", 00:23:30.526 "ffdhe6144", 00:23:30.526 "ffdhe8192" 00:23:30.526 ], 00:23:30.526 "rdma_umr_per_io": false 00:23:30.526 } 00:23:30.526 }, 00:23:30.526 { 00:23:30.526 "method": "bdev_nvme_set_hotplug", 00:23:30.526 "params": { 00:23:30.526 "period_us": 100000, 00:23:30.526 "enable": false 00:23:30.526 } 00:23:30.526 }, 00:23:30.526 { 00:23:30.526 "method": "bdev_malloc_create", 00:23:30.526 "params": { 00:23:30.526 "name": "malloc0", 00:23:30.526 "num_blocks": 8192, 00:23:30.526 "block_size": 4096, 00:23:30.526 "physical_block_size": 4096, 00:23:30.526 "uuid": "ca68154b-024d-420c-b57b-cc6cc253fe99", 00:23:30.526 "optimal_io_boundary": 0, 00:23:30.526 "md_size": 0, 00:23:30.526 "dif_type": 0, 00:23:30.526 "dif_is_head_of_md": false, 00:23:30.526 "dif_pi_format": 0 00:23:30.526 } 00:23:30.526 }, 00:23:30.526 { 00:23:30.526 "method": "bdev_wait_for_examine" 00:23:30.526 } 00:23:30.526 ] 00:23:30.526 }, 00:23:30.526 { 00:23:30.526 "subsystem": "nbd", 00:23:30.526 "config": [] 00:23:30.526 }, 00:23:30.526 { 00:23:30.526 "subsystem": "scheduler", 00:23:30.526 "config": [ 00:23:30.526 { 00:23:30.526 "method": "framework_set_scheduler", 00:23:30.526 "params": { 00:23:30.526 "name": "static" 00:23:30.526 } 00:23:30.526 } 00:23:30.526 ] 00:23:30.526 }, 00:23:30.526 { 00:23:30.526 "subsystem": "nvmf", 00:23:30.526 "config": [ 00:23:30.526 { 00:23:30.526 "method": "nvmf_set_config", 00:23:30.526 "params": { 00:23:30.526 "discovery_filter": "match_any", 00:23:30.526 "admin_cmd_passthru": { 00:23:30.526 "identify_ctrlr": false 00:23:30.526 }, 00:23:30.526 "dhchap_digests": [ 00:23:30.526 "sha256", 00:23:30.526 "sha384", 00:23:30.526 "sha512" 00:23:30.526 ], 00:23:30.526 "dhchap_dhgroups": [ 00:23:30.526 "null", 00:23:30.526 "ffdhe2048", 00:23:30.526 "ffdhe3072", 00:23:30.526 "ffdhe4096", 00:23:30.526 "ffdhe6144", 00:23:30.526 "ffdhe8192" 00:23:30.526 ] 00:23:30.526 } 00:23:30.526 }, 00:23:30.526 { 
00:23:30.526 "method": "nvmf_set_max_subsystems", 00:23:30.526 "params": { 00:23:30.526 "max_subsystems": 1024 00:23:30.526 } 00:23:30.526 }, 00:23:30.526 { 00:23:30.526 "method": "nvmf_set_crdt", 00:23:30.526 "params": { 00:23:30.526 "crdt1": 0, 00:23:30.526 "crdt2": 0, 00:23:30.526 "crdt3": 0 00:23:30.526 } 00:23:30.526 }, 00:23:30.526 { 00:23:30.526 "method": "nvmf_create_transport", 00:23:30.526 "params": { 00:23:30.526 "trtype": "TCP", 00:23:30.526 "max_queue_depth": 128, 00:23:30.526 "max_io_qpairs_per_ctrlr": 127, 00:23:30.526 "in_capsule_data_size": 4096, 00:23:30.526 "max_io_size": 131072, 00:23:30.526 "io_unit_size": 131072, 00:23:30.526 "max_aq_depth": 128, 00:23:30.526 "num_shared_buffers": 511, 00:23:30.526 "buf_cache_size": 4294967295, 00:23:30.526 "dif_insert_or_strip": false, 00:23:30.526 "zcopy": false, 00:23:30.526 "c2h_success": false, 00:23:30.526 "sock_priority": 0, 00:23:30.526 "abort_timeout_sec": 1, 00:23:30.526 "ack_timeout": 0, 00:23:30.526 "data_wr_pool_size": 0 00:23:30.526 } 00:23:30.526 }, 00:23:30.526 { 00:23:30.526 "method": "nvmf_create_subsystem", 00:23:30.526 "params": { 00:23:30.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.526 "allow_any_host": false, 00:23:30.526 "serial_number": "SPDK00000000000001", 00:23:30.526 "model_number": "SPDK bdev Controller", 00:23:30.526 "max_namespaces": 10, 00:23:30.526 "min_cntlid": 1, 00:23:30.526 "max_cntlid": 65519, 00:23:30.526 "ana_reporting": false 00:23:30.526 } 00:23:30.526 }, 00:23:30.526 { 00:23:30.526 "method": "nvmf_subsystem_add_host", 00:23:30.526 "params": { 00:23:30.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.526 "host": "nqn.2016-06.io.spdk:host1", 00:23:30.526 "psk": "key0" 00:23:30.526 } 00:23:30.526 }, 00:23:30.526 { 00:23:30.526 "method": "nvmf_subsystem_add_ns", 00:23:30.526 "params": { 00:23:30.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.526 "namespace": { 00:23:30.526 "nsid": 1, 00:23:30.526 "bdev_name": "malloc0", 00:23:30.526 "nguid": 
"CA68154B024D420CB57BCC6CC253FE99", 00:23:30.526 "uuid": "ca68154b-024d-420c-b57b-cc6cc253fe99", 00:23:30.526 "no_auto_visible": false 00:23:30.526 } 00:23:30.526 } 00:23:30.526 }, 00:23:30.526 { 00:23:30.526 "method": "nvmf_subsystem_add_listener", 00:23:30.526 "params": { 00:23:30.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.526 "listen_address": { 00:23:30.526 "trtype": "TCP", 00:23:30.526 "adrfam": "IPv4", 00:23:30.526 "traddr": "10.0.0.2", 00:23:30.526 "trsvcid": "4420" 00:23:30.526 }, 00:23:30.526 "secure_channel": true 00:23:30.526 } 00:23:30.526 } 00:23:30.526 ] 00:23:30.526 } 00:23:30.527 ] 00:23:30.527 }' 00:23:30.527 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.527 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1022016 00:23:30.527 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1022016 00:23:30.527 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:30.527 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1022016 ']' 00:23:30.527 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.527 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:30.527 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:30.527 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:30.527 06:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.527 [2024-12-15 06:13:50.565247] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:30.527 [2024-12-15 06:13:50.565292] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.527 [2024-12-15 06:13:50.638026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.527 [2024-12-15 06:13:50.658800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.527 [2024-12-15 06:13:50.658837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.527 [2024-12-15 06:13:50.658844] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.527 [2024-12-15 06:13:50.658850] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.527 [2024-12-15 06:13:50.658855] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:30.527 [2024-12-15 06:13:50.659365] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.785 [2024-12-15 06:13:50.866924] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.785 [2024-12-15 06:13:50.898955] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:30.785 [2024-12-15 06:13:50.899152] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.354 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:31.354 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:31.354 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:31.354 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:31.354 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.354 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.354 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1022111 00:23:31.354 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1022111 /var/tmp/bdevperf.sock 00:23:31.354 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1022111 ']' 00:23:31.354 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:31.354 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:31.354 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:23:31.354 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:31.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:31.354 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:31.354 "subsystems": [ 00:23:31.354 { 00:23:31.354 "subsystem": "keyring", 00:23:31.354 "config": [ 00:23:31.354 { 00:23:31.354 "method": "keyring_file_add_key", 00:23:31.354 "params": { 00:23:31.354 "name": "key0", 00:23:31.354 "path": "/tmp/tmp.z0VrvUvZQZ" 00:23:31.354 } 00:23:31.354 } 00:23:31.354 ] 00:23:31.354 }, 00:23:31.354 { 00:23:31.354 "subsystem": "iobuf", 00:23:31.354 "config": [ 00:23:31.354 { 00:23:31.354 "method": "iobuf_set_options", 00:23:31.354 "params": { 00:23:31.354 "small_pool_count": 8192, 00:23:31.354 "large_pool_count": 1024, 00:23:31.354 "small_bufsize": 8192, 00:23:31.354 "large_bufsize": 135168, 00:23:31.354 "enable_numa": false 00:23:31.354 } 00:23:31.354 } 00:23:31.354 ] 00:23:31.354 }, 00:23:31.354 { 00:23:31.354 "subsystem": "sock", 00:23:31.354 "config": [ 00:23:31.354 { 00:23:31.354 "method": "sock_set_default_impl", 00:23:31.354 "params": { 00:23:31.354 "impl_name": "posix" 00:23:31.354 } 00:23:31.354 }, 00:23:31.354 { 00:23:31.354 "method": "sock_impl_set_options", 00:23:31.354 "params": { 00:23:31.354 "impl_name": "ssl", 00:23:31.354 "recv_buf_size": 4096, 00:23:31.354 "send_buf_size": 4096, 00:23:31.354 "enable_recv_pipe": true, 00:23:31.354 "enable_quickack": false, 00:23:31.354 "enable_placement_id": 0, 00:23:31.354 "enable_zerocopy_send_server": true, 00:23:31.354 "enable_zerocopy_send_client": false, 00:23:31.354 "zerocopy_threshold": 0, 00:23:31.354 "tls_version": 0, 00:23:31.354 "enable_ktls": false 00:23:31.354 } 00:23:31.354 }, 00:23:31.354 { 00:23:31.354 "method": "sock_impl_set_options", 00:23:31.354 "params": { 
00:23:31.354 "impl_name": "posix", 00:23:31.354 "recv_buf_size": 2097152, 00:23:31.354 "send_buf_size": 2097152, 00:23:31.354 "enable_recv_pipe": true, 00:23:31.354 "enable_quickack": false, 00:23:31.354 "enable_placement_id": 0, 00:23:31.354 "enable_zerocopy_send_server": true, 00:23:31.354 "enable_zerocopy_send_client": false, 00:23:31.354 "zerocopy_threshold": 0, 00:23:31.354 "tls_version": 0, 00:23:31.354 "enable_ktls": false 00:23:31.354 } 00:23:31.354 } 00:23:31.354 ] 00:23:31.354 }, 00:23:31.354 { 00:23:31.354 "subsystem": "vmd", 00:23:31.354 "config": [] 00:23:31.354 }, 00:23:31.354 { 00:23:31.354 "subsystem": "accel", 00:23:31.354 "config": [ 00:23:31.354 { 00:23:31.354 "method": "accel_set_options", 00:23:31.354 "params": { 00:23:31.354 "small_cache_size": 128, 00:23:31.354 "large_cache_size": 16, 00:23:31.354 "task_count": 2048, 00:23:31.354 "sequence_count": 2048, 00:23:31.354 "buf_count": 2048 00:23:31.354 } 00:23:31.354 } 00:23:31.354 ] 00:23:31.354 }, 00:23:31.354 { 00:23:31.354 "subsystem": "bdev", 00:23:31.354 "config": [ 00:23:31.354 { 00:23:31.354 "method": "bdev_set_options", 00:23:31.354 "params": { 00:23:31.354 "bdev_io_pool_size": 65535, 00:23:31.354 "bdev_io_cache_size": 256, 00:23:31.354 "bdev_auto_examine": true, 00:23:31.354 "iobuf_small_cache_size": 128, 00:23:31.354 "iobuf_large_cache_size": 16 00:23:31.354 } 00:23:31.354 }, 00:23:31.354 { 00:23:31.354 "method": "bdev_raid_set_options", 00:23:31.354 "params": { 00:23:31.354 "process_window_size_kb": 1024, 00:23:31.354 "process_max_bandwidth_mb_sec": 0 00:23:31.354 } 00:23:31.354 }, 00:23:31.354 { 00:23:31.354 "method": "bdev_iscsi_set_options", 00:23:31.354 "params": { 00:23:31.354 "timeout_sec": 30 00:23:31.354 } 00:23:31.354 }, 00:23:31.354 { 00:23:31.354 "method": "bdev_nvme_set_options", 00:23:31.354 "params": { 00:23:31.354 "action_on_timeout": "none", 00:23:31.354 "timeout_us": 0, 00:23:31.354 "timeout_admin_us": 0, 00:23:31.354 "keep_alive_timeout_ms": 10000, 00:23:31.354 
"arbitration_burst": 0, 00:23:31.354 "low_priority_weight": 0, 00:23:31.354 "medium_priority_weight": 0, 00:23:31.354 "high_priority_weight": 0, 00:23:31.354 "nvme_adminq_poll_period_us": 10000, 00:23:31.354 "nvme_ioq_poll_period_us": 0, 00:23:31.354 "io_queue_requests": 512, 00:23:31.354 "delay_cmd_submit": true, 00:23:31.354 "transport_retry_count": 4, 00:23:31.354 "bdev_retry_count": 3, 00:23:31.354 "transport_ack_timeout": 0, 00:23:31.354 "ctrlr_loss_timeout_sec": 0, 00:23:31.354 "reconnect_delay_sec": 0, 00:23:31.354 "fast_io_fail_timeout_sec": 0, 00:23:31.354 "disable_auto_failback": false, 00:23:31.354 "generate_uuids": false, 00:23:31.354 "transport_tos": 0, 00:23:31.354 "nvme_error_stat": false, 00:23:31.354 "rdma_srq_size": 0, 00:23:31.354 "io_path_stat": false, 00:23:31.354 "allow_accel_sequence": false, 00:23:31.354 "rdma_max_cq_size": 0, 00:23:31.354 "rdma_cm_event_timeout_ms": 0, 00:23:31.354 "dhchap_digests": [ 00:23:31.354 "sha256", 00:23:31.354 "sha384", 00:23:31.354 "sha512" 00:23:31.354 ], 00:23:31.354 "dhchap_dhgroups": [ 00:23:31.354 "null", 00:23:31.354 "ffdhe2048", 00:23:31.354 "ffdhe3072", 00:23:31.354 "ffdhe4096", 00:23:31.354 "ffdhe6144", 00:23:31.354 "ffdhe8192" 00:23:31.354 ], 00:23:31.354 "rdma_umr_per_io": false 00:23:31.354 } 00:23:31.354 }, 00:23:31.354 { 00:23:31.354 "method": "bdev_nvme_attach_controller", 00:23:31.354 "params": { 00:23:31.354 "name": "TLSTEST", 00:23:31.354 "trtype": "TCP", 00:23:31.354 "adrfam": "IPv4", 00:23:31.354 "traddr": "10.0.0.2", 00:23:31.354 "trsvcid": "4420", 00:23:31.354 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.354 "prchk_reftag": false, 00:23:31.354 "prchk_guard": false, 00:23:31.354 "ctrlr_loss_timeout_sec": 0, 00:23:31.354 "reconnect_delay_sec": 0, 00:23:31.354 "fast_io_fail_timeout_sec": 0, 00:23:31.354 "psk": "key0", 00:23:31.354 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:31.354 "hdgst": false, 00:23:31.355 "ddgst": false, 00:23:31.355 "multipath": "multipath" 00:23:31.355 } 
00:23:31.355 }, 00:23:31.355 { 00:23:31.355 "method": "bdev_nvme_set_hotplug", 00:23:31.355 "params": { 00:23:31.355 "period_us": 100000, 00:23:31.355 "enable": false 00:23:31.355 } 00:23:31.355 }, 00:23:31.355 { 00:23:31.355 "method": "bdev_wait_for_examine" 00:23:31.355 } 00:23:31.355 ] 00:23:31.355 }, 00:23:31.355 { 00:23:31.355 "subsystem": "nbd", 00:23:31.355 "config": [] 00:23:31.355 } 00:23:31.355 ] 00:23:31.355 }' 00:23:31.355 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:31.355 06:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.355 [2024-12-15 06:13:51.483592] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:31.355 [2024-12-15 06:13:51.483639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1022111 ] 00:23:31.614 [2024-12-15 06:13:51.557390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.614 [2024-12-15 06:13:51.579925] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:31.614 [2024-12-15 06:13:51.727554] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:32.552 06:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:32.552 06:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:32.552 06:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:32.552 Running I/O for 10 seconds... 
00:23:34.426 5466.00 IOPS, 21.35 MiB/s [2024-12-15T05:13:55.503Z] 5552.50 IOPS, 21.69 MiB/s [2024-12-15T05:13:56.441Z] 5543.33 IOPS, 21.65 MiB/s [2024-12-15T05:13:57.820Z] 5555.75 IOPS, 21.70 MiB/s [2024-12-15T05:13:58.757Z] 5472.40 IOPS, 21.38 MiB/s [2024-12-15T05:13:59.694Z] 5379.00 IOPS, 21.01 MiB/s [2024-12-15T05:14:00.631Z] 5316.57 IOPS, 20.77 MiB/s [2024-12-15T05:14:01.567Z] 5278.12 IOPS, 20.62 MiB/s [2024-12-15T05:14:02.504Z] 5309.89 IOPS, 20.74 MiB/s [2024-12-15T05:14:02.504Z] 5339.30 IOPS, 20.86 MiB/s 00:23:42.364 Latency(us) 00:23:42.364 [2024-12-15T05:14:02.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:42.364 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:42.364 Verification LBA range: start 0x0 length 0x2000 00:23:42.364 TLSTESTn1 : 10.02 5343.03 20.87 0.00 0.00 23920.14 5742.20 30333.81 00:23:42.364 [2024-12-15T05:14:02.504Z] =================================================================================================================== 00:23:42.364 [2024-12-15T05:14:02.504Z] Total : 5343.03 20.87 0.00 0.00 23920.14 5742.20 30333.81 00:23:42.364 { 00:23:42.364 "results": [ 00:23:42.364 { 00:23:42.364 "job": "TLSTESTn1", 00:23:42.364 "core_mask": "0x4", 00:23:42.364 "workload": "verify", 00:23:42.364 "status": "finished", 00:23:42.364 "verify_range": { 00:23:42.364 "start": 0, 00:23:42.364 "length": 8192 00:23:42.364 }, 00:23:42.364 "queue_depth": 128, 00:23:42.364 "io_size": 4096, 00:23:42.364 "runtime": 10.016781, 00:23:42.364 "iops": 5343.033854888112, 00:23:42.364 "mibps": 20.871225995656687, 00:23:42.364 "io_failed": 0, 00:23:42.364 "io_timeout": 0, 00:23:42.364 "avg_latency_us": 23920.143780197875, 00:23:42.364 "min_latency_us": 5742.201904761905, 00:23:42.364 "max_latency_us": 30333.805714285714 00:23:42.364 } 00:23:42.364 ], 00:23:42.364 "core_count": 1 00:23:42.364 } 00:23:42.364 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:23:42.364 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1022111 00:23:42.364 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1022111 ']' 00:23:42.364 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1022111 00:23:42.364 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:42.364 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:42.364 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1022111 00:23:42.623 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:42.623 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:42.623 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1022111' 00:23:42.623 killing process with pid 1022111 00:23:42.623 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1022111 00:23:42.623 Received shutdown signal, test time was about 10.000000 seconds 00:23:42.623 00:23:42.623 Latency(us) 00:23:42.623 [2024-12-15T05:14:02.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:42.623 [2024-12-15T05:14:02.763Z] =================================================================================================================== 00:23:42.623 [2024-12-15T05:14:02.763Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:42.623 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1022111 00:23:42.623 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1022016 00:23:42.623 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 1022016 ']' 00:23:42.623 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1022016 00:23:42.624 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:42.624 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:42.624 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1022016 00:23:42.624 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:42.624 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:42.624 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1022016' 00:23:42.624 killing process with pid 1022016 00:23:42.624 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1022016 00:23:42.624 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1022016 00:23:42.883 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:42.883 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:42.883 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:42.883 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.883 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1023960 00:23:42.883 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:42.883 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1023960 00:23:42.883 
06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1023960 ']' 00:23:42.883 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.883 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:42.883 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.883 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:42.883 06:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.883 [2024-12-15 06:14:02.948233] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:42.883 [2024-12-15 06:14:02.948285] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:43.142 [2024-12-15 06:14:03.026537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.142 [2024-12-15 06:14:03.047129] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:43.142 [2024-12-15 06:14:03.047167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:43.142 [2024-12-15 06:14:03.047175] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:43.142 [2024-12-15 06:14:03.047183] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:43.142 [2024-12-15 06:14:03.047189] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:43.142 [2024-12-15 06:14:03.047649] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.142 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:43.142 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:43.142 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:43.142 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:43.142 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:43.142 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.142 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.z0VrvUvZQZ 00:23:43.142 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.z0VrvUvZQZ 00:23:43.142 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:43.400 [2024-12-15 06:14:03.358246] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.400 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:43.659 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:43.659 [2024-12-15 06:14:03.723176] tcp.c:1049:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:23:43.659 [2024-12-15 06:14:03.723385] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.659 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:43.918 malloc0 00:23:43.918 06:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:44.178 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.z0VrvUvZQZ 00:23:44.178 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:44.437 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1024299 00:23:44.437 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:44.437 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:44.437 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1024299 /var/tmp/bdevperf.sock 00:23:44.437 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1024299 ']' 00:23:44.437 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:44.437 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:44.437 
06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:44.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:44.437 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:44.437 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.437 [2024-12-15 06:14:04.527165] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:44.437 [2024-12-15 06:14:04.527212] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1024299 ] 00:23:44.696 [2024-12-15 06:14:04.601788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.696 [2024-12-15 06:14:04.623598] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.696 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:44.696 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:44.696 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z0VrvUvZQZ 00:23:44.954 06:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:44.954 [2024-12-15 06:14:05.086728] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:23:45.212 nvme0n1 00:23:45.212 06:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:45.212 Running I/O for 1 seconds... 00:23:46.146 5210.00 IOPS, 20.35 MiB/s 00:23:46.146 Latency(us) 00:23:46.146 [2024-12-15T05:14:06.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.146 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:46.146 Verification LBA range: start 0x0 length 0x2000 00:23:46.146 nvme0n1 : 1.02 5248.44 20.50 0.00 0.00 24210.97 4805.97 26838.55 00:23:46.146 [2024-12-15T05:14:06.286Z] =================================================================================================================== 00:23:46.146 [2024-12-15T05:14:06.286Z] Total : 5248.44 20.50 0.00 0.00 24210.97 4805.97 26838.55 00:23:46.405 { 00:23:46.405 "results": [ 00:23:46.405 { 00:23:46.405 "job": "nvme0n1", 00:23:46.405 "core_mask": "0x2", 00:23:46.405 "workload": "verify", 00:23:46.405 "status": "finished", 00:23:46.405 "verify_range": { 00:23:46.405 "start": 0, 00:23:46.405 "length": 8192 00:23:46.405 }, 00:23:46.405 "queue_depth": 128, 00:23:46.405 "io_size": 4096, 00:23:46.405 "runtime": 1.017255, 00:23:46.405 "iops": 5248.438198878354, 00:23:46.405 "mibps": 20.50171171436857, 00:23:46.405 "io_failed": 0, 00:23:46.405 "io_timeout": 0, 00:23:46.405 "avg_latency_us": 24210.9661341967, 00:23:46.405 "min_latency_us": 4805.973333333333, 00:23:46.405 "max_latency_us": 26838.55238095238 00:23:46.405 } 00:23:46.405 ], 00:23:46.405 "core_count": 1 00:23:46.405 } 00:23:46.405 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1024299 00:23:46.405 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1024299 ']' 00:23:46.405 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 1024299 00:23:46.405 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:46.405 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:46.405 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1024299 00:23:46.405 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:46.405 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:46.405 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1024299' 00:23:46.405 killing process with pid 1024299 00:23:46.405 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1024299 00:23:46.405 Received shutdown signal, test time was about 1.000000 seconds 00:23:46.405 00:23:46.405 Latency(us) 00:23:46.405 [2024-12-15T05:14:06.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.405 [2024-12-15T05:14:06.545Z] =================================================================================================================== 00:23:46.405 [2024-12-15T05:14:06.545Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:46.405 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1024299 00:23:46.405 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1023960 00:23:46.405 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1023960 ']' 00:23:46.405 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1023960 00:23:46.405 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:46.405 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:46.405 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1023960 00:23:46.664 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:46.664 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:46.664 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1023960' 00:23:46.664 killing process with pid 1023960 00:23:46.664 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1023960 00:23:46.664 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1023960 00:23:46.664 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:46.664 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:46.664 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:46.664 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.664 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1024546 00:23:46.664 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:46.664 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1024546 00:23:46.664 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1024546 ']' 00:23:46.664 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.664 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:23:46.664 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.664 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:46.664 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.664 [2024-12-15 06:14:06.779019] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:46.664 [2024-12-15 06:14:06.779070] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.923 [2024-12-15 06:14:06.854196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.923 [2024-12-15 06:14:06.875081] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.923 [2024-12-15 06:14:06.875118] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.923 [2024-12-15 06:14:06.875125] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.923 [2024-12-15 06:14:06.875131] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.923 [2024-12-15 06:14:06.875137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:46.923 [2024-12-15 06:14:06.875625] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.923 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:46.923 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:46.923 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:46.923 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:46.923 06:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.923 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.923 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:46.923 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.923 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.923 [2024-12-15 06:14:07.014731] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:46.923 malloc0 00:23:46.923 [2024-12-15 06:14:07.042854] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:46.923 [2024-12-15 06:14:07.043076] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.182 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.182 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1024667 00:23:47.182 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1024667 /var/tmp/bdevperf.sock 00:23:47.182 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:47.182 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1024667 ']' 00:23:47.182 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:47.182 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:47.182 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:47.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:47.182 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:47.182 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.182 [2024-12-15 06:14:07.117846] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:47.182 [2024-12-15 06:14:07.117892] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1024667 ] 00:23:47.182 [2024-12-15 06:14:07.194621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.182 [2024-12-15 06:14:07.217219] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:47.182 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:47.182 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:47.182 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z0VrvUvZQZ 00:23:47.441 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:47.699 [2024-12-15 06:14:07.672665] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:47.699 nvme0n1 00:23:47.699 06:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:47.957 Running I/O for 1 seconds... 
00:23:48.893 5328.00 IOPS, 20.81 MiB/s 00:23:48.893 Latency(us) 00:23:48.893 [2024-12-15T05:14:09.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.893 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:48.893 Verification LBA range: start 0x0 length 0x2000 00:23:48.893 nvme0n1 : 1.01 5377.68 21.01 0.00 0.00 23628.33 5211.67 23592.96 00:23:48.893 [2024-12-15T05:14:09.033Z] =================================================================================================================== 00:23:48.893 [2024-12-15T05:14:09.033Z] Total : 5377.68 21.01 0.00 0.00 23628.33 5211.67 23592.96 00:23:48.894 { 00:23:48.894 "results": [ 00:23:48.894 { 00:23:48.894 "job": "nvme0n1", 00:23:48.894 "core_mask": "0x2", 00:23:48.894 "workload": "verify", 00:23:48.894 "status": "finished", 00:23:48.894 "verify_range": { 00:23:48.894 "start": 0, 00:23:48.894 "length": 8192 00:23:48.894 }, 00:23:48.894 "queue_depth": 128, 00:23:48.894 "io_size": 4096, 00:23:48.894 "runtime": 1.014563, 00:23:48.894 "iops": 5377.684776598398, 00:23:48.894 "mibps": 21.00658115858749, 00:23:48.894 "io_failed": 0, 00:23:48.894 "io_timeout": 0, 00:23:48.894 "avg_latency_us": 23628.328825583016, 00:23:48.894 "min_latency_us": 5211.672380952381, 00:23:48.894 "max_latency_us": 23592.96 00:23:48.894 } 00:23:48.894 ], 00:23:48.894 "core_count": 1 00:23:48.894 } 00:23:48.894 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:48.894 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.894 06:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.894 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.894 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:48.894 "subsystems": [ 00:23:48.894 { 00:23:48.894 "subsystem": "keyring", 
00:23:48.894 "config": [ 00:23:48.894 { 00:23:48.894 "method": "keyring_file_add_key", 00:23:48.894 "params": { 00:23:48.894 "name": "key0", 00:23:48.894 "path": "/tmp/tmp.z0VrvUvZQZ" 00:23:48.894 } 00:23:48.894 } 00:23:48.894 ] 00:23:48.894 }, 00:23:48.894 { 00:23:48.894 "subsystem": "iobuf", 00:23:48.894 "config": [ 00:23:48.894 { 00:23:48.894 "method": "iobuf_set_options", 00:23:48.894 "params": { 00:23:48.894 "small_pool_count": 8192, 00:23:48.894 "large_pool_count": 1024, 00:23:48.894 "small_bufsize": 8192, 00:23:48.894 "large_bufsize": 135168, 00:23:48.894 "enable_numa": false 00:23:48.894 } 00:23:48.894 } 00:23:48.894 ] 00:23:48.894 }, 00:23:48.894 { 00:23:48.894 "subsystem": "sock", 00:23:48.894 "config": [ 00:23:48.894 { 00:23:48.894 "method": "sock_set_default_impl", 00:23:48.894 "params": { 00:23:48.894 "impl_name": "posix" 00:23:48.894 } 00:23:48.894 }, 00:23:48.894 { 00:23:48.894 "method": "sock_impl_set_options", 00:23:48.894 "params": { 00:23:48.894 "impl_name": "ssl", 00:23:48.894 "recv_buf_size": 4096, 00:23:48.894 "send_buf_size": 4096, 00:23:48.894 "enable_recv_pipe": true, 00:23:48.894 "enable_quickack": false, 00:23:48.894 "enable_placement_id": 0, 00:23:48.894 "enable_zerocopy_send_server": true, 00:23:48.894 "enable_zerocopy_send_client": false, 00:23:48.894 "zerocopy_threshold": 0, 00:23:48.894 "tls_version": 0, 00:23:48.894 "enable_ktls": false 00:23:48.894 } 00:23:48.894 }, 00:23:48.894 { 00:23:48.894 "method": "sock_impl_set_options", 00:23:48.894 "params": { 00:23:48.894 "impl_name": "posix", 00:23:48.894 "recv_buf_size": 2097152, 00:23:48.894 "send_buf_size": 2097152, 00:23:48.894 "enable_recv_pipe": true, 00:23:48.894 "enable_quickack": false, 00:23:48.894 "enable_placement_id": 0, 00:23:48.894 "enable_zerocopy_send_server": true, 00:23:48.894 "enable_zerocopy_send_client": false, 00:23:48.894 "zerocopy_threshold": 0, 00:23:48.894 "tls_version": 0, 00:23:48.894 "enable_ktls": false 00:23:48.894 } 00:23:48.894 } 00:23:48.894 ] 
00:23:48.894 }, 00:23:48.894 { 00:23:48.894 "subsystem": "vmd", 00:23:48.894 "config": [] 00:23:48.894 }, 00:23:48.894 { 00:23:48.894 "subsystem": "accel", 00:23:48.894 "config": [ 00:23:48.894 { 00:23:48.894 "method": "accel_set_options", 00:23:48.894 "params": { 00:23:48.894 "small_cache_size": 128, 00:23:48.894 "large_cache_size": 16, 00:23:48.894 "task_count": 2048, 00:23:48.894 "sequence_count": 2048, 00:23:48.894 "buf_count": 2048 00:23:48.894 } 00:23:48.894 } 00:23:48.894 ] 00:23:48.894 }, 00:23:48.894 { 00:23:48.894 "subsystem": "bdev", 00:23:48.894 "config": [ 00:23:48.894 { 00:23:48.894 "method": "bdev_set_options", 00:23:48.894 "params": { 00:23:48.894 "bdev_io_pool_size": 65535, 00:23:48.894 "bdev_io_cache_size": 256, 00:23:48.894 "bdev_auto_examine": true, 00:23:48.894 "iobuf_small_cache_size": 128, 00:23:48.894 "iobuf_large_cache_size": 16 00:23:48.894 } 00:23:48.894 }, 00:23:48.894 { 00:23:48.894 "method": "bdev_raid_set_options", 00:23:48.894 "params": { 00:23:48.894 "process_window_size_kb": 1024, 00:23:48.894 "process_max_bandwidth_mb_sec": 0 00:23:48.894 } 00:23:48.894 }, 00:23:48.894 { 00:23:48.894 "method": "bdev_iscsi_set_options", 00:23:48.894 "params": { 00:23:48.894 "timeout_sec": 30 00:23:48.894 } 00:23:48.894 }, 00:23:48.894 { 00:23:48.894 "method": "bdev_nvme_set_options", 00:23:48.894 "params": { 00:23:48.894 "action_on_timeout": "none", 00:23:48.894 "timeout_us": 0, 00:23:48.894 "timeout_admin_us": 0, 00:23:48.894 "keep_alive_timeout_ms": 10000, 00:23:48.894 "arbitration_burst": 0, 00:23:48.894 "low_priority_weight": 0, 00:23:48.894 "medium_priority_weight": 0, 00:23:48.894 "high_priority_weight": 0, 00:23:48.894 "nvme_adminq_poll_period_us": 10000, 00:23:48.894 "nvme_ioq_poll_period_us": 0, 00:23:48.894 "io_queue_requests": 0, 00:23:48.894 "delay_cmd_submit": true, 00:23:48.894 "transport_retry_count": 4, 00:23:48.894 "bdev_retry_count": 3, 00:23:48.894 "transport_ack_timeout": 0, 00:23:48.894 "ctrlr_loss_timeout_sec": 0, 00:23:48.894 
"reconnect_delay_sec": 0, 00:23:48.894 "fast_io_fail_timeout_sec": 0, 00:23:48.894 "disable_auto_failback": false, 00:23:48.894 "generate_uuids": false, 00:23:48.894 "transport_tos": 0, 00:23:48.894 "nvme_error_stat": false, 00:23:48.894 "rdma_srq_size": 0, 00:23:48.894 "io_path_stat": false, 00:23:48.894 "allow_accel_sequence": false, 00:23:48.894 "rdma_max_cq_size": 0, 00:23:48.894 "rdma_cm_event_timeout_ms": 0, 00:23:48.894 "dhchap_digests": [ 00:23:48.894 "sha256", 00:23:48.894 "sha384", 00:23:48.894 "sha512" 00:23:48.894 ], 00:23:48.894 "dhchap_dhgroups": [ 00:23:48.894 "null", 00:23:48.894 "ffdhe2048", 00:23:48.894 "ffdhe3072", 00:23:48.894 "ffdhe4096", 00:23:48.894 "ffdhe6144", 00:23:48.894 "ffdhe8192" 00:23:48.894 ], 00:23:48.894 "rdma_umr_per_io": false 00:23:48.894 } 00:23:48.894 }, 00:23:48.894 { 00:23:48.894 "method": "bdev_nvme_set_hotplug", 00:23:48.894 "params": { 00:23:48.894 "period_us": 100000, 00:23:48.894 "enable": false 00:23:48.894 } 00:23:48.894 }, 00:23:48.894 { 00:23:48.894 "method": "bdev_malloc_create", 00:23:48.894 "params": { 00:23:48.894 "name": "malloc0", 00:23:48.894 "num_blocks": 8192, 00:23:48.894 "block_size": 4096, 00:23:48.894 "physical_block_size": 4096, 00:23:48.894 "uuid": "8797fd6f-eceb-4a48-9c3b-fe05d39ea2b5", 00:23:48.894 "optimal_io_boundary": 0, 00:23:48.894 "md_size": 0, 00:23:48.894 "dif_type": 0, 00:23:48.894 "dif_is_head_of_md": false, 00:23:48.894 "dif_pi_format": 0 00:23:48.894 } 00:23:48.894 }, 00:23:48.894 { 00:23:48.894 "method": "bdev_wait_for_examine" 00:23:48.894 } 00:23:48.894 ] 00:23:48.894 }, 00:23:48.894 { 00:23:48.894 "subsystem": "nbd", 00:23:48.894 "config": [] 00:23:48.894 }, 00:23:48.894 { 00:23:48.894 "subsystem": "scheduler", 00:23:48.894 "config": [ 00:23:48.894 { 00:23:48.894 "method": "framework_set_scheduler", 00:23:48.894 "params": { 00:23:48.894 "name": "static" 00:23:48.894 } 00:23:48.894 } 00:23:48.894 ] 00:23:48.894 }, 00:23:48.894 { 00:23:48.894 "subsystem": "nvmf", 00:23:48.894 "config": 
[ 00:23:48.894 { 00:23:48.894 "method": "nvmf_set_config", 00:23:48.894 "params": { 00:23:48.894 "discovery_filter": "match_any", 00:23:48.894 "admin_cmd_passthru": { 00:23:48.894 "identify_ctrlr": false 00:23:48.894 }, 00:23:48.894 "dhchap_digests": [ 00:23:48.894 "sha256", 00:23:48.894 "sha384", 00:23:48.894 "sha512" 00:23:48.894 ], 00:23:48.894 "dhchap_dhgroups": [ 00:23:48.894 "null", 00:23:48.894 "ffdhe2048", 00:23:48.894 "ffdhe3072", 00:23:48.894 "ffdhe4096", 00:23:48.894 "ffdhe6144", 00:23:48.894 "ffdhe8192" 00:23:48.894 ] 00:23:48.894 } 00:23:48.894 }, 00:23:48.894 { 00:23:48.894 "method": "nvmf_set_max_subsystems", 00:23:48.894 "params": { 00:23:48.894 "max_subsystems": 1024 00:23:48.894 } 00:23:48.894 }, 00:23:48.894 { 00:23:48.894 "method": "nvmf_set_crdt", 00:23:48.894 "params": { 00:23:48.894 "crdt1": 0, 00:23:48.894 "crdt2": 0, 00:23:48.894 "crdt3": 0 00:23:48.894 } 00:23:48.894 }, 00:23:48.894 { 00:23:48.894 "method": "nvmf_create_transport", 00:23:48.894 "params": { 00:23:48.894 "trtype": "TCP", 00:23:48.894 "max_queue_depth": 128, 00:23:48.894 "max_io_qpairs_per_ctrlr": 127, 00:23:48.894 "in_capsule_data_size": 4096, 00:23:48.894 "max_io_size": 131072, 00:23:48.894 "io_unit_size": 131072, 00:23:48.895 "max_aq_depth": 128, 00:23:48.895 "num_shared_buffers": 511, 00:23:48.895 "buf_cache_size": 4294967295, 00:23:48.895 "dif_insert_or_strip": false, 00:23:48.895 "zcopy": false, 00:23:48.895 "c2h_success": false, 00:23:48.895 "sock_priority": 0, 00:23:48.895 "abort_timeout_sec": 1, 00:23:48.895 "ack_timeout": 0, 00:23:48.895 "data_wr_pool_size": 0 00:23:48.895 } 00:23:48.895 }, 00:23:48.895 { 00:23:48.895 "method": "nvmf_create_subsystem", 00:23:48.895 "params": { 00:23:48.895 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.895 "allow_any_host": false, 00:23:48.895 "serial_number": "00000000000000000000", 00:23:48.895 "model_number": "SPDK bdev Controller", 00:23:48.895 "max_namespaces": 32, 00:23:48.895 "min_cntlid": 1, 00:23:48.895 "max_cntlid": 65519, 
00:23:48.895 "ana_reporting": false 00:23:48.895 } 00:23:48.895 }, 00:23:48.895 { 00:23:48.895 "method": "nvmf_subsystem_add_host", 00:23:48.895 "params": { 00:23:48.895 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.895 "host": "nqn.2016-06.io.spdk:host1", 00:23:48.895 "psk": "key0" 00:23:48.895 } 00:23:48.895 }, 00:23:48.895 { 00:23:48.895 "method": "nvmf_subsystem_add_ns", 00:23:48.895 "params": { 00:23:48.895 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.895 "namespace": { 00:23:48.895 "nsid": 1, 00:23:48.895 "bdev_name": "malloc0", 00:23:48.895 "nguid": "8797FD6FECEB4A489C3BFE05D39EA2B5", 00:23:48.895 "uuid": "8797fd6f-eceb-4a48-9c3b-fe05d39ea2b5", 00:23:48.895 "no_auto_visible": false 00:23:48.895 } 00:23:48.895 } 00:23:48.895 }, 00:23:48.895 { 00:23:48.895 "method": "nvmf_subsystem_add_listener", 00:23:48.895 "params": { 00:23:48.895 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.895 "listen_address": { 00:23:48.895 "trtype": "TCP", 00:23:48.895 "adrfam": "IPv4", 00:23:48.895 "traddr": "10.0.0.2", 00:23:48.895 "trsvcid": "4420" 00:23:48.895 }, 00:23:48.895 "secure_channel": false, 00:23:48.895 "sock_impl": "ssl" 00:23:48.895 } 00:23:48.895 } 00:23:48.895 ] 00:23:48.895 } 00:23:48.895 ] 00:23:48.895 }' 00:23:48.895 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:49.154 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:49.154 "subsystems": [ 00:23:49.154 { 00:23:49.154 "subsystem": "keyring", 00:23:49.154 "config": [ 00:23:49.154 { 00:23:49.154 "method": "keyring_file_add_key", 00:23:49.154 "params": { 00:23:49.154 "name": "key0", 00:23:49.154 "path": "/tmp/tmp.z0VrvUvZQZ" 00:23:49.154 } 00:23:49.154 } 00:23:49.154 ] 00:23:49.154 }, 00:23:49.154 { 00:23:49.154 "subsystem": "iobuf", 00:23:49.154 "config": [ 00:23:49.154 { 00:23:49.154 "method": "iobuf_set_options", 00:23:49.154 "params": { 
00:23:49.154 "small_pool_count": 8192, 00:23:49.154 "large_pool_count": 1024, 00:23:49.154 "small_bufsize": 8192, 00:23:49.154 "large_bufsize": 135168, 00:23:49.154 "enable_numa": false 00:23:49.154 } 00:23:49.154 } 00:23:49.154 ] 00:23:49.154 }, 00:23:49.154 { 00:23:49.154 "subsystem": "sock", 00:23:49.154 "config": [ 00:23:49.154 { 00:23:49.154 "method": "sock_set_default_impl", 00:23:49.154 "params": { 00:23:49.154 "impl_name": "posix" 00:23:49.154 } 00:23:49.154 }, 00:23:49.154 { 00:23:49.154 "method": "sock_impl_set_options", 00:23:49.154 "params": { 00:23:49.154 "impl_name": "ssl", 00:23:49.154 "recv_buf_size": 4096, 00:23:49.154 "send_buf_size": 4096, 00:23:49.154 "enable_recv_pipe": true, 00:23:49.154 "enable_quickack": false, 00:23:49.154 "enable_placement_id": 0, 00:23:49.154 "enable_zerocopy_send_server": true, 00:23:49.154 "enable_zerocopy_send_client": false, 00:23:49.154 "zerocopy_threshold": 0, 00:23:49.154 "tls_version": 0, 00:23:49.154 "enable_ktls": false 00:23:49.154 } 00:23:49.154 }, 00:23:49.154 { 00:23:49.154 "method": "sock_impl_set_options", 00:23:49.154 "params": { 00:23:49.154 "impl_name": "posix", 00:23:49.154 "recv_buf_size": 2097152, 00:23:49.154 "send_buf_size": 2097152, 00:23:49.154 "enable_recv_pipe": true, 00:23:49.154 "enable_quickack": false, 00:23:49.154 "enable_placement_id": 0, 00:23:49.154 "enable_zerocopy_send_server": true, 00:23:49.154 "enable_zerocopy_send_client": false, 00:23:49.154 "zerocopy_threshold": 0, 00:23:49.154 "tls_version": 0, 00:23:49.154 "enable_ktls": false 00:23:49.154 } 00:23:49.154 } 00:23:49.154 ] 00:23:49.154 }, 00:23:49.154 { 00:23:49.154 "subsystem": "vmd", 00:23:49.154 "config": [] 00:23:49.154 }, 00:23:49.154 { 00:23:49.154 "subsystem": "accel", 00:23:49.154 "config": [ 00:23:49.154 { 00:23:49.154 "method": "accel_set_options", 00:23:49.154 "params": { 00:23:49.154 "small_cache_size": 128, 00:23:49.154 "large_cache_size": 16, 00:23:49.154 "task_count": 2048, 00:23:49.154 "sequence_count": 2048, 
00:23:49.154 "buf_count": 2048 00:23:49.154 } 00:23:49.154 } 00:23:49.154 ] 00:23:49.154 }, 00:23:49.154 { 00:23:49.154 "subsystem": "bdev", 00:23:49.154 "config": [ 00:23:49.154 { 00:23:49.154 "method": "bdev_set_options", 00:23:49.154 "params": { 00:23:49.154 "bdev_io_pool_size": 65535, 00:23:49.154 "bdev_io_cache_size": 256, 00:23:49.154 "bdev_auto_examine": true, 00:23:49.154 "iobuf_small_cache_size": 128, 00:23:49.154 "iobuf_large_cache_size": 16 00:23:49.154 } 00:23:49.154 }, 00:23:49.154 { 00:23:49.154 "method": "bdev_raid_set_options", 00:23:49.154 "params": { 00:23:49.154 "process_window_size_kb": 1024, 00:23:49.154 "process_max_bandwidth_mb_sec": 0 00:23:49.154 } 00:23:49.154 }, 00:23:49.154 { 00:23:49.154 "method": "bdev_iscsi_set_options", 00:23:49.154 "params": { 00:23:49.154 "timeout_sec": 30 00:23:49.154 } 00:23:49.154 }, 00:23:49.154 { 00:23:49.154 "method": "bdev_nvme_set_options", 00:23:49.154 "params": { 00:23:49.154 "action_on_timeout": "none", 00:23:49.154 "timeout_us": 0, 00:23:49.154 "timeout_admin_us": 0, 00:23:49.154 "keep_alive_timeout_ms": 10000, 00:23:49.154 "arbitration_burst": 0, 00:23:49.154 "low_priority_weight": 0, 00:23:49.154 "medium_priority_weight": 0, 00:23:49.154 "high_priority_weight": 0, 00:23:49.154 "nvme_adminq_poll_period_us": 10000, 00:23:49.154 "nvme_ioq_poll_period_us": 0, 00:23:49.154 "io_queue_requests": 512, 00:23:49.154 "delay_cmd_submit": true, 00:23:49.154 "transport_retry_count": 4, 00:23:49.154 "bdev_retry_count": 3, 00:23:49.154 "transport_ack_timeout": 0, 00:23:49.154 "ctrlr_loss_timeout_sec": 0, 00:23:49.154 "reconnect_delay_sec": 0, 00:23:49.155 "fast_io_fail_timeout_sec": 0, 00:23:49.155 "disable_auto_failback": false, 00:23:49.155 "generate_uuids": false, 00:23:49.155 "transport_tos": 0, 00:23:49.155 "nvme_error_stat": false, 00:23:49.155 "rdma_srq_size": 0, 00:23:49.155 "io_path_stat": false, 00:23:49.155 "allow_accel_sequence": false, 00:23:49.155 "rdma_max_cq_size": 0, 00:23:49.155 
"rdma_cm_event_timeout_ms": 0, 00:23:49.155 "dhchap_digests": [ 00:23:49.155 "sha256", 00:23:49.155 "sha384", 00:23:49.155 "sha512" 00:23:49.155 ], 00:23:49.155 "dhchap_dhgroups": [ 00:23:49.155 "null", 00:23:49.155 "ffdhe2048", 00:23:49.155 "ffdhe3072", 00:23:49.155 "ffdhe4096", 00:23:49.155 "ffdhe6144", 00:23:49.155 "ffdhe8192" 00:23:49.155 ], 00:23:49.155 "rdma_umr_per_io": false 00:23:49.155 } 00:23:49.155 }, 00:23:49.155 { 00:23:49.155 "method": "bdev_nvme_attach_controller", 00:23:49.155 "params": { 00:23:49.155 "name": "nvme0", 00:23:49.155 "trtype": "TCP", 00:23:49.155 "adrfam": "IPv4", 00:23:49.155 "traddr": "10.0.0.2", 00:23:49.155 "trsvcid": "4420", 00:23:49.155 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.155 "prchk_reftag": false, 00:23:49.155 "prchk_guard": false, 00:23:49.155 "ctrlr_loss_timeout_sec": 0, 00:23:49.155 "reconnect_delay_sec": 0, 00:23:49.155 "fast_io_fail_timeout_sec": 0, 00:23:49.155 "psk": "key0", 00:23:49.155 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:49.155 "hdgst": false, 00:23:49.155 "ddgst": false, 00:23:49.155 "multipath": "multipath" 00:23:49.155 } 00:23:49.155 }, 00:23:49.155 { 00:23:49.155 "method": "bdev_nvme_set_hotplug", 00:23:49.155 "params": { 00:23:49.155 "period_us": 100000, 00:23:49.155 "enable": false 00:23:49.155 } 00:23:49.155 }, 00:23:49.155 { 00:23:49.155 "method": "bdev_enable_histogram", 00:23:49.155 "params": { 00:23:49.155 "name": "nvme0n1", 00:23:49.155 "enable": true 00:23:49.155 } 00:23:49.155 }, 00:23:49.155 { 00:23:49.155 "method": "bdev_wait_for_examine" 00:23:49.155 } 00:23:49.155 ] 00:23:49.155 }, 00:23:49.155 { 00:23:49.155 "subsystem": "nbd", 00:23:49.155 "config": [] 00:23:49.155 } 00:23:49.155 ] 00:23:49.155 }' 00:23:49.155 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1024667 00:23:49.155 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1024667 ']' 00:23:49.155 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 1024667 00:23:49.155 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:49.155 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:49.155 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1024667 00:23:49.414 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:49.414 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:49.414 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1024667' 00:23:49.414 killing process with pid 1024667 00:23:49.414 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1024667 00:23:49.414 Received shutdown signal, test time was about 1.000000 seconds 00:23:49.414 00:23:49.414 Latency(us) 00:23:49.414 [2024-12-15T05:14:09.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.414 [2024-12-15T05:14:09.554Z] =================================================================================================================== 00:23:49.414 [2024-12-15T05:14:09.554Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:49.414 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1024667 00:23:49.414 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1024546 00:23:49.414 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1024546 ']' 00:23:49.414 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1024546 00:23:49.414 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:49.414 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:49.414 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1024546 00:23:49.414 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:49.414 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:49.414 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1024546' 00:23:49.414 killing process with pid 1024546 00:23:49.414 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1024546 00:23:49.414 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1024546 00:23:49.673 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:49.673 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:49.673 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:49.673 "subsystems": [ 00:23:49.673 { 00:23:49.673 "subsystem": "keyring", 00:23:49.673 "config": [ 00:23:49.673 { 00:23:49.673 "method": "keyring_file_add_key", 00:23:49.673 "params": { 00:23:49.673 "name": "key0", 00:23:49.673 "path": "/tmp/tmp.z0VrvUvZQZ" 00:23:49.673 } 00:23:49.673 } 00:23:49.673 ] 00:23:49.673 }, 00:23:49.673 { 00:23:49.673 "subsystem": "iobuf", 00:23:49.673 "config": [ 00:23:49.673 { 00:23:49.673 "method": "iobuf_set_options", 00:23:49.673 "params": { 00:23:49.673 "small_pool_count": 8192, 00:23:49.673 "large_pool_count": 1024, 00:23:49.673 "small_bufsize": 8192, 00:23:49.673 "large_bufsize": 135168, 00:23:49.673 "enable_numa": false 00:23:49.673 } 00:23:49.673 } 00:23:49.673 ] 00:23:49.673 }, 00:23:49.673 { 00:23:49.673 "subsystem": "sock", 00:23:49.673 "config": [ 00:23:49.673 { 00:23:49.673 "method": 
"sock_set_default_impl", 00:23:49.673 "params": { 00:23:49.673 "impl_name": "posix" 00:23:49.673 } 00:23:49.673 }, 00:23:49.673 { 00:23:49.673 "method": "sock_impl_set_options", 00:23:49.673 "params": { 00:23:49.673 "impl_name": "ssl", 00:23:49.673 "recv_buf_size": 4096, 00:23:49.673 "send_buf_size": 4096, 00:23:49.673 "enable_recv_pipe": true, 00:23:49.673 "enable_quickack": false, 00:23:49.673 "enable_placement_id": 0, 00:23:49.673 "enable_zerocopy_send_server": true, 00:23:49.673 "enable_zerocopy_send_client": false, 00:23:49.673 "zerocopy_threshold": 0, 00:23:49.673 "tls_version": 0, 00:23:49.673 "enable_ktls": false 00:23:49.673 } 00:23:49.673 }, 00:23:49.673 { 00:23:49.673 "method": "sock_impl_set_options", 00:23:49.673 "params": { 00:23:49.673 "impl_name": "posix", 00:23:49.673 "recv_buf_size": 2097152, 00:23:49.673 "send_buf_size": 2097152, 00:23:49.673 "enable_recv_pipe": true, 00:23:49.673 "enable_quickack": false, 00:23:49.673 "enable_placement_id": 0, 00:23:49.673 "enable_zerocopy_send_server": true, 00:23:49.673 "enable_zerocopy_send_client": false, 00:23:49.673 "zerocopy_threshold": 0, 00:23:49.673 "tls_version": 0, 00:23:49.673 "enable_ktls": false 00:23:49.673 } 00:23:49.673 } 00:23:49.673 ] 00:23:49.673 }, 00:23:49.673 { 00:23:49.673 "subsystem": "vmd", 00:23:49.673 "config": [] 00:23:49.673 }, 00:23:49.673 { 00:23:49.673 "subsystem": "accel", 00:23:49.673 "config": [ 00:23:49.673 { 00:23:49.673 "method": "accel_set_options", 00:23:49.673 "params": { 00:23:49.673 "small_cache_size": 128, 00:23:49.673 "large_cache_size": 16, 00:23:49.673 "task_count": 2048, 00:23:49.673 "sequence_count": 2048, 00:23:49.673 "buf_count": 2048 00:23:49.674 } 00:23:49.674 } 00:23:49.674 ] 00:23:49.674 }, 00:23:49.674 { 00:23:49.674 "subsystem": "bdev", 00:23:49.674 "config": [ 00:23:49.674 { 00:23:49.674 "method": "bdev_set_options", 00:23:49.674 "params": { 00:23:49.674 "bdev_io_pool_size": 65535, 00:23:49.674 "bdev_io_cache_size": 256, 00:23:49.674 
"bdev_auto_examine": true, 00:23:49.674 "iobuf_small_cache_size": 128, 00:23:49.674 "iobuf_large_cache_size": 16 00:23:49.674 } 00:23:49.674 }, 00:23:49.674 { 00:23:49.674 "method": "bdev_raid_set_options", 00:23:49.674 "params": { 00:23:49.674 "process_window_size_kb": 1024, 00:23:49.674 "process_max_bandwidth_mb_sec": 0 00:23:49.674 } 00:23:49.674 }, 00:23:49.674 { 00:23:49.674 "method": "bdev_iscsi_set_options", 00:23:49.674 "params": { 00:23:49.674 "timeout_sec": 30 00:23:49.674 } 00:23:49.674 }, 00:23:49.674 { 00:23:49.674 "method": "bdev_nvme_set_options", 00:23:49.674 "params": { 00:23:49.674 "action_on_timeout": "none", 00:23:49.674 "timeout_us": 0, 00:23:49.674 "timeout_admin_us": 0, 00:23:49.674 "keep_alive_timeout_ms": 10000, 00:23:49.674 "arbitration_burst": 0, 00:23:49.674 "low_priority_weight": 0, 00:23:49.674 "medium_priority_weight": 0, 00:23:49.674 "high_priority_weight": 0, 00:23:49.674 "nvme_adminq_poll_period_us": 10000, 00:23:49.674 "nvme_ioq_poll_period_us": 0, 00:23:49.674 "io_queue_requests": 0, 00:23:49.674 "delay_cmd_submit": true, 00:23:49.674 "transport_retry_count": 4, 00:23:49.674 "bdev_retry_count": 3, 00:23:49.674 "transport_ack_timeout": 0, 00:23:49.674 "ctrlr_loss_timeout_sec": 0, 00:23:49.674 "reconnect_delay_sec": 0, 00:23:49.674 "fast_io_fail_timeout_sec": 0, 00:23:49.674 "disable_auto_failback": false, 00:23:49.674 "generate_uuids": false, 00:23:49.674 "transport_tos": 0, 00:23:49.674 "nvme_error_stat": false, 00:23:49.674 "rdma_srq_size": 0, 00:23:49.674 "io_path_stat": false, 00:23:49.674 "allow_accel_sequence": false, 00:23:49.674 "rdma_max_cq_size": 0, 00:23:49.674 "rdma_cm_event_timeout_ms": 0, 00:23:49.674 "dhchap_digests": [ 00:23:49.674 "sha256", 00:23:49.674 "sha384", 00:23:49.674 "sha512" 00:23:49.674 ], 00:23:49.674 "dhchap_dhgroups": [ 00:23:49.674 "null", 00:23:49.674 "ffdhe2048", 00:23:49.674 "ffdhe3072", 00:23:49.674 "ffdhe4096", 00:23:49.674 "ffdhe6144", 00:23:49.674 "ffdhe8192" 00:23:49.674 ], 00:23:49.674 
"rdma_umr_per_io": false 00:23:49.674 } 00:23:49.674 }, 00:23:49.674 { 00:23:49.674 "method": "bdev_nvme_set_hotplug", 00:23:49.674 "params": { 00:23:49.674 "period_us": 100000, 00:23:49.674 "enable": false 00:23:49.674 } 00:23:49.674 }, 00:23:49.674 { 00:23:49.674 "method": "bdev_malloc_create", 00:23:49.674 "params": { 00:23:49.674 "name": "malloc0", 00:23:49.674 "num_blocks": 8192, 00:23:49.674 "block_size": 4096, 00:23:49.674 "physical_block_size": 4096, 00:23:49.674 "uuid": "8797fd6f-eceb-4a48-9c3b-fe05d39ea2b5", 00:23:49.674 "optimal_io_boundary": 0, 00:23:49.674 "md_size": 0, 00:23:49.674 "dif_type": 0, 00:23:49.674 "dif_is_head_of_md": false, 00:23:49.674 "dif_pi_format": 0 00:23:49.674 } 00:23:49.674 }, 00:23:49.674 { 00:23:49.674 "method": "bdev_wait_for_examine" 00:23:49.674 } 00:23:49.674 ] 00:23:49.674 }, 00:23:49.674 { 00:23:49.674 "subsystem": "nbd", 00:23:49.674 "config": [] 00:23:49.674 }, 00:23:49.674 { 00:23:49.674 "subsystem": "scheduler", 00:23:49.674 "config": [ 00:23:49.674 { 00:23:49.674 "method": "framework_set_scheduler", 00:23:49.674 "params": { 00:23:49.674 "name": "static" 00:23:49.674 } 00:23:49.674 } 00:23:49.674 ] 00:23:49.674 }, 00:23:49.674 { 00:23:49.674 "subsystem": "nvmf", 00:23:49.674 "config": [ 00:23:49.674 { 00:23:49.674 "method": "nvmf_set_config", 00:23:49.674 "params": { 00:23:49.674 "discovery_filter": "match_any", 00:23:49.674 "admin_cmd_passthru": { 00:23:49.674 "identify_ctrlr": false 00:23:49.674 }, 00:23:49.674 "dhchap_digests": [ 00:23:49.674 "sha256", 00:23:49.674 "sha384", 00:23:49.674 "sha512" 00:23:49.674 ], 00:23:49.674 "dhchap_dhgroups": [ 00:23:49.674 "null", 00:23:49.674 "ffdhe2048", 00:23:49.674 "ffdhe3072", 00:23:49.674 "ffdhe4096", 00:23:49.674 "ffdhe6144", 00:23:49.674 "ffdhe8192" 00:23:49.674 ] 00:23:49.674 } 00:23:49.674 }, 00:23:49.674 { 00:23:49.674 "method": "nvmf_set_max_subsystems", 00:23:49.674 "params": { 00:23:49.674 "max_subsystems": 1024 00:23:49.674 } 00:23:49.674 }, 00:23:49.674 { 
00:23:49.674 "method": "nvmf_set_crdt", 00:23:49.674 "params": { 00:23:49.674 "crdt1": 0, 00:23:49.674 "crdt2": 0, 00:23:49.674 "crdt3": 0 00:23:49.674 } 00:23:49.674 }, 00:23:49.674 { 00:23:49.674 "method": "nvmf_create_transport", 00:23:49.674 "params": { 00:23:49.674 "trtype": "TCP", 00:23:49.674 "max_queue_depth": 128, 00:23:49.674 "max_io_qpairs_per_ctrlr": 127, 00:23:49.674 "in_capsule_data_size": 4096, 00:23:49.674 "max_io_size": 131072, 00:23:49.674 "io_unit_size": 131072, 00:23:49.674 "max_aq_depth": 128, 00:23:49.674 "num_shared_buffers": 511, 00:23:49.674 "buf_cache_size": 4294967295, 00:23:49.674 "dif_insert_or_strip": false, 00:23:49.674 "zcopy": false, 00:23:49.674 "c2h_success": false, 00:23:49.674 "sock_priority": 0, 00:23:49.674 "abort_timeout_sec": 1, 00:23:49.674 "ack_timeout": 0, 00:23:49.674 "data_wr_pool_size": 0 00:23:49.674 } 00:23:49.674 }, 00:23:49.674 { 00:23:49.674 "method": "nvmf_create_subsystem", 00:23:49.674 "params": { 00:23:49.674 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.674 "allow_any_host": false, 00:23:49.674 "serial_number": "00000000000000000000", 00:23:49.674 "model_number": "SPDK bdev Controller", 00:23:49.674 "max_namespaces": 32, 00:23:49.674 "min_cntlid": 1, 00:23:49.674 "max_cntlid": 65519, 00:23:49.674 "ana_reporting": false 00:23:49.674 } 00:23:49.674 }, 00:23:49.674 { 00:23:49.674 "method": "nvmf_subsystem_add_host", 00:23:49.674 "params": { 00:23:49.674 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.674 "host": "nqn.2016-06.io.spdk:host1", 00:23:49.674 "psk": "key0" 00:23:49.674 } 00:23:49.674 }, 00:23:49.674 { 00:23:49.674 "method": "nvmf_subsystem_add_ns", 00:23:49.675 "params": { 00:23:49.675 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.675 "namespace": { 00:23:49.675 "nsid": 1, 00:23:49.675 "bdev_name": "malloc0", 00:23:49.675 "nguid": "8797FD6FECEB4A489C3BFE05D39EA2B5", 00:23:49.675 "uuid": "8797fd6f-eceb-4a48-9c3b-fe05d39ea2b5", 00:23:49.675 "no_auto_visible": false 00:23:49.675 } 00:23:49.675 } 
00:23:49.675 }, 00:23:49.675 { 00:23:49.675 "method": "nvmf_subsystem_add_listener", 00:23:49.675 "params": { 00:23:49.675 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.675 "listen_address": { 00:23:49.675 "trtype": "TCP", 00:23:49.675 "adrfam": "IPv4", 00:23:49.675 "traddr": "10.0.0.2", 00:23:49.675 "trsvcid": "4420" 00:23:49.675 }, 00:23:49.675 "secure_channel": false, 00:23:49.675 "sock_impl": "ssl" 00:23:49.675 } 00:23:49.675 } 00:23:49.675 ] 00:23:49.675 } 00:23:49.675 ] 00:23:49.675 }' 00:23:49.675 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:49.675 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.675 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1025039 00:23:49.675 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:49.675 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1025039 00:23:49.675 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1025039 ']' 00:23:49.675 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.675 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:49.675 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:49.675 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:49.675 06:14:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.675 [2024-12-15 06:14:09.760540] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:49.675 [2024-12-15 06:14:09.760585] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:49.935 [2024-12-15 06:14:09.838674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.935 [2024-12-15 06:14:09.859753] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:49.935 [2024-12-15 06:14:09.859789] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:49.935 [2024-12-15 06:14:09.859797] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:49.935 [2024-12-15 06:14:09.859807] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:49.935 [2024-12-15 06:14:09.859812] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:49.935 [2024-12-15 06:14:09.860365] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.935 [2024-12-15 06:14:10.071781] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.193 [2024-12-15 06:14:10.103809] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:50.193 [2024-12-15 06:14:10.103985] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:50.452 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:50.452 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:50.452 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:50.452 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:50.452 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.712 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:50.712 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1025269 00:23:50.712 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1025269 /var/tmp/bdevperf.sock 00:23:50.712 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1025269 ']' 00:23:50.712 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:50.712 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:50.712 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:23:50.712 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:50.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:50.712 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:23:50.712 "subsystems": [ 00:23:50.712 { 00:23:50.712 "subsystem": "keyring", 00:23:50.712 "config": [ 00:23:50.712 { 00:23:50.712 "method": "keyring_file_add_key", 00:23:50.712 "params": { 00:23:50.712 "name": "key0", 00:23:50.712 "path": "/tmp/tmp.z0VrvUvZQZ" 00:23:50.712 } 00:23:50.712 } 00:23:50.712 ] 00:23:50.712 }, 00:23:50.712 { 00:23:50.712 "subsystem": "iobuf", 00:23:50.712 "config": [ 00:23:50.712 { 00:23:50.712 "method": "iobuf_set_options", 00:23:50.712 "params": { 00:23:50.712 "small_pool_count": 8192, 00:23:50.712 "large_pool_count": 1024, 00:23:50.712 "small_bufsize": 8192, 00:23:50.712 "large_bufsize": 135168, 00:23:50.712 "enable_numa": false 00:23:50.712 } 00:23:50.712 } 00:23:50.712 ] 00:23:50.712 }, 00:23:50.712 { 00:23:50.712 "subsystem": "sock", 00:23:50.712 "config": [ 00:23:50.712 { 00:23:50.712 "method": "sock_set_default_impl", 00:23:50.712 "params": { 00:23:50.712 "impl_name": "posix" 00:23:50.712 } 00:23:50.712 }, 00:23:50.712 { 00:23:50.712 "method": "sock_impl_set_options", 00:23:50.712 "params": { 00:23:50.712 "impl_name": "ssl", 00:23:50.712 "recv_buf_size": 4096, 00:23:50.712 "send_buf_size": 4096, 00:23:50.712 "enable_recv_pipe": true, 00:23:50.712 "enable_quickack": false, 00:23:50.712 "enable_placement_id": 0, 00:23:50.712 "enable_zerocopy_send_server": true, 00:23:50.712 "enable_zerocopy_send_client": false, 00:23:50.712 "zerocopy_threshold": 0, 00:23:50.712 "tls_version": 0, 00:23:50.712 "enable_ktls": false 00:23:50.712 } 00:23:50.712 }, 00:23:50.712 { 00:23:50.712 "method": "sock_impl_set_options", 00:23:50.712 "params": { 
00:23:50.712 "impl_name": "posix", 00:23:50.712 "recv_buf_size": 2097152, 00:23:50.712 "send_buf_size": 2097152, 00:23:50.712 "enable_recv_pipe": true, 00:23:50.712 "enable_quickack": false, 00:23:50.712 "enable_placement_id": 0, 00:23:50.712 "enable_zerocopy_send_server": true, 00:23:50.712 "enable_zerocopy_send_client": false, 00:23:50.712 "zerocopy_threshold": 0, 00:23:50.712 "tls_version": 0, 00:23:50.712 "enable_ktls": false 00:23:50.712 } 00:23:50.712 } 00:23:50.712 ] 00:23:50.712 }, 00:23:50.712 { 00:23:50.712 "subsystem": "vmd", 00:23:50.712 "config": [] 00:23:50.712 }, 00:23:50.712 { 00:23:50.712 "subsystem": "accel", 00:23:50.712 "config": [ 00:23:50.712 { 00:23:50.712 "method": "accel_set_options", 00:23:50.712 "params": { 00:23:50.712 "small_cache_size": 128, 00:23:50.712 "large_cache_size": 16, 00:23:50.712 "task_count": 2048, 00:23:50.712 "sequence_count": 2048, 00:23:50.712 "buf_count": 2048 00:23:50.712 } 00:23:50.712 } 00:23:50.712 ] 00:23:50.712 }, 00:23:50.712 { 00:23:50.712 "subsystem": "bdev", 00:23:50.712 "config": [ 00:23:50.712 { 00:23:50.712 "method": "bdev_set_options", 00:23:50.712 "params": { 00:23:50.712 "bdev_io_pool_size": 65535, 00:23:50.712 "bdev_io_cache_size": 256, 00:23:50.712 "bdev_auto_examine": true, 00:23:50.712 "iobuf_small_cache_size": 128, 00:23:50.712 "iobuf_large_cache_size": 16 00:23:50.712 } 00:23:50.712 }, 00:23:50.712 { 00:23:50.712 "method": "bdev_raid_set_options", 00:23:50.712 "params": { 00:23:50.712 "process_window_size_kb": 1024, 00:23:50.712 "process_max_bandwidth_mb_sec": 0 00:23:50.712 } 00:23:50.712 }, 00:23:50.712 { 00:23:50.712 "method": "bdev_iscsi_set_options", 00:23:50.712 "params": { 00:23:50.712 "timeout_sec": 30 00:23:50.712 } 00:23:50.712 }, 00:23:50.712 { 00:23:50.712 "method": "bdev_nvme_set_options", 00:23:50.712 "params": { 00:23:50.712 "action_on_timeout": "none", 00:23:50.712 "timeout_us": 0, 00:23:50.712 "timeout_admin_us": 0, 00:23:50.712 "keep_alive_timeout_ms": 10000, 00:23:50.712 
"arbitration_burst": 0, 00:23:50.712 "low_priority_weight": 0, 00:23:50.712 "medium_priority_weight": 0, 00:23:50.712 "high_priority_weight": 0, 00:23:50.712 "nvme_adminq_poll_period_us": 10000, 00:23:50.712 "nvme_ioq_poll_period_us": 0, 00:23:50.712 "io_queue_requests": 512, 00:23:50.712 "delay_cmd_submit": true, 00:23:50.712 "transport_retry_count": 4, 00:23:50.712 "bdev_retry_count": 3, 00:23:50.712 "transport_ack_timeout": 0, 00:23:50.712 "ctrlr_loss_timeout_sec": 0, 00:23:50.712 "reconnect_delay_sec": 0, 00:23:50.712 "fast_io_fail_timeout_sec": 0, 00:23:50.712 "disable_auto_failback": false, 00:23:50.712 "generate_uuids": false, 00:23:50.712 "transport_tos": 0, 00:23:50.712 "nvme_error_stat": false, 00:23:50.712 "rdma_srq_size": 0, 00:23:50.712 "io_path_stat": false, 00:23:50.712 "allow_accel_sequence": false, 00:23:50.712 "rdma_max_cq_size": 0, 00:23:50.712 "rdma_cm_event_timeout_ms": 0, 00:23:50.712 "dhchap_digests": [ 00:23:50.712 "sha256", 00:23:50.712 "sha384", 00:23:50.712 "sha512" 00:23:50.712 ], 00:23:50.712 "dhchap_dhgroups": [ 00:23:50.712 "null", 00:23:50.712 "ffdhe2048", 00:23:50.712 "ffdhe3072", 00:23:50.712 "ffdhe4096", 00:23:50.712 "ffdhe6144", 00:23:50.712 "ffdhe8192" 00:23:50.712 ], 00:23:50.712 "rdma_umr_per_io": false 00:23:50.712 } 00:23:50.712 }, 00:23:50.712 { 00:23:50.712 "method": "bdev_nvme_attach_controller", 00:23:50.712 "params": { 00:23:50.712 "name": "nvme0", 00:23:50.712 "trtype": "TCP", 00:23:50.712 "adrfam": "IPv4", 00:23:50.712 "traddr": "10.0.0.2", 00:23:50.712 "trsvcid": "4420", 00:23:50.712 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.712 "prchk_reftag": false, 00:23:50.712 "prchk_guard": false, 00:23:50.712 "ctrlr_loss_timeout_sec": 0, 00:23:50.712 "reconnect_delay_sec": 0, 00:23:50.712 "fast_io_fail_timeout_sec": 0, 00:23:50.712 "psk": "key0", 00:23:50.712 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:50.712 "hdgst": false, 00:23:50.712 "ddgst": false, 00:23:50.712 "multipath": "multipath" 00:23:50.712 } 00:23:50.712 
}, 00:23:50.713 { 00:23:50.713 "method": "bdev_nvme_set_hotplug", 00:23:50.713 "params": { 00:23:50.713 "period_us": 100000, 00:23:50.713 "enable": false 00:23:50.713 } 00:23:50.713 }, 00:23:50.713 { 00:23:50.713 "method": "bdev_enable_histogram", 00:23:50.713 "params": { 00:23:50.713 "name": "nvme0n1", 00:23:50.713 "enable": true 00:23:50.713 } 00:23:50.713 }, 00:23:50.713 { 00:23:50.713 "method": "bdev_wait_for_examine" 00:23:50.713 } 00:23:50.713 ] 00:23:50.713 }, 00:23:50.713 { 00:23:50.713 "subsystem": "nbd", 00:23:50.713 "config": [] 00:23:50.713 } 00:23:50.713 ] 00:23:50.713 }' 00:23:50.713 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:50.713 06:14:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.713 [2024-12-15 06:14:10.662334] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:50.713 [2024-12-15 06:14:10.662382] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1025269 ] 00:23:50.713 [2024-12-15 06:14:10.735146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.713 [2024-12-15 06:14:10.757098] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.971 [2024-12-15 06:14:10.904838] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:51.539 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:51.539 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:51.539 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:23:51.539 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:23:51.798 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.798 06:14:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:51.798 Running I/O for 1 seconds... 00:23:52.807 4855.00 IOPS, 18.96 MiB/s 00:23:52.807 Latency(us) 00:23:52.807 [2024-12-15T05:14:12.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.807 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:52.807 Verification LBA range: start 0x0 length 0x2000 00:23:52.807 nvme0n1 : 1.03 4835.16 18.89 0.00 0.00 26146.44 5804.62 29709.65 00:23:52.807 [2024-12-15T05:14:12.947Z] =================================================================================================================== 00:23:52.807 [2024-12-15T05:14:12.947Z] Total : 4835.16 18.89 0.00 0.00 26146.44 5804.62 29709.65 00:23:52.807 { 00:23:52.807 "results": [ 00:23:52.807 { 00:23:52.807 "job": "nvme0n1", 00:23:52.807 "core_mask": "0x2", 00:23:52.807 "workload": "verify", 00:23:52.807 "status": "finished", 00:23:52.807 "verify_range": { 00:23:52.807 "start": 0, 00:23:52.807 "length": 8192 00:23:52.807 }, 00:23:52.807 "queue_depth": 128, 00:23:52.807 "io_size": 4096, 00:23:52.807 "runtime": 1.030576, 00:23:52.807 "iops": 4835.160143453758, 00:23:52.807 "mibps": 18.88734431036624, 00:23:52.807 "io_failed": 0, 00:23:52.807 "io_timeout": 0, 00:23:52.807 "avg_latency_us": 26146.435905698418, 00:23:52.807 "min_latency_us": 5804.617142857142, 00:23:52.807 "max_latency_us": 29709.653333333332 00:23:52.807 } 00:23:52.807 ], 00:23:52.807 "core_count": 1 00:23:52.807 } 00:23:52.808 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:23:52.808 
06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:23:52.808 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:52.808 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:23:52.808 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:23:52.808 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:23:52.808 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:52.808 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:52.808 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:52.808 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:52.808 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:52.808 nvmf_trace.0 00:23:53.086 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:23:53.086 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1025269 00:23:53.086 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1025269 ']' 00:23:53.086 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1025269 00:23:53.086 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:53.086 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:53.086 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 1025269 00:23:53.086 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:53.086 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:53.086 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1025269' 00:23:53.086 killing process with pid 1025269 00:23:53.086 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1025269 00:23:53.086 Received shutdown signal, test time was about 1.000000 seconds 00:23:53.086 00:23:53.086 Latency(us) 00:23:53.086 [2024-12-15T05:14:13.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.086 [2024-12-15T05:14:13.226Z] =================================================================================================================== 00:23:53.086 [2024-12-15T05:14:13.226Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:53.086 06:14:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1025269 00:23:53.086 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:53.086 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:53.086 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:23:53.086 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:53.086 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:23:53.086 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:53.086 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:53.086 rmmod nvme_tcp 00:23:53.086 rmmod nvme_fabrics 00:23:53.086 rmmod nvme_keyring 00:23:53.086 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- 
# modprobe -v -r nvme-fabrics 00:23:53.086 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:23:53.086 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:23:53.086 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1025039 ']' 00:23:53.086 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1025039 00:23:53.086 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1025039 ']' 00:23:53.086 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1025039 00:23:53.086 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:53.086 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:53.086 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1025039 00:23:53.345 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:53.345 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:53.345 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1025039' 00:23:53.345 killing process with pid 1025039 00:23:53.345 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1025039 00:23:53.345 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1025039 00:23:53.345 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:53.345 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:53.345 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:53.345 06:14:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:23:53.345 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:23:53.345 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:53.345 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:23:53.345 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:53.345 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:53.345 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.345 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:53.345 06:14:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.whJkQn2VJC /tmp/tmp.6s2Xw81u8E /tmp/tmp.z0VrvUvZQZ 00:23:55.882 00:23:55.882 real 1m18.581s 00:23:55.882 user 1m59.521s 00:23:55.882 sys 0m31.018s 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.882 ************************************ 00:23:55.882 END TEST nvmf_tls 00:23:55.882 ************************************ 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:55.882 
06:14:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:55.882 ************************************ 00:23:55.882 START TEST nvmf_fips 00:23:55.882 ************************************ 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:55.882 * Looking for test storage... 00:23:55.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@340 -- # ver1_l=2 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # 
return 0 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:55.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.882 --rc genhtml_branch_coverage=1 00:23:55.882 --rc genhtml_function_coverage=1 00:23:55.882 --rc genhtml_legend=1 00:23:55.882 --rc geninfo_all_blocks=1 00:23:55.882 --rc geninfo_unexecuted_blocks=1 00:23:55.882 00:23:55.882 ' 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:55.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.882 --rc genhtml_branch_coverage=1 00:23:55.882 --rc genhtml_function_coverage=1 00:23:55.882 --rc genhtml_legend=1 00:23:55.882 --rc geninfo_all_blocks=1 00:23:55.882 --rc geninfo_unexecuted_blocks=1 00:23:55.882 00:23:55.882 ' 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:55.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.882 --rc genhtml_branch_coverage=1 00:23:55.882 --rc genhtml_function_coverage=1 00:23:55.882 --rc genhtml_legend=1 00:23:55.882 --rc geninfo_all_blocks=1 00:23:55.882 --rc geninfo_unexecuted_blocks=1 00:23:55.882 00:23:55.882 ' 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:55.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.882 --rc genhtml_branch_coverage=1 00:23:55.882 --rc genhtml_function_coverage=1 00:23:55.882 --rc genhtml_legend=1 00:23:55.882 --rc geninfo_all_blocks=1 00:23:55.882 --rc geninfo_unexecuted_blocks=1 00:23:55.882 00:23:55.882 ' 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.882 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:55.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@90 -- # check_openssl_version 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:23:55.883 Error setting digest 00:23:55.883 40326D1E277F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:23:55.883 40326D1E277F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:55.883 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:23:55.884 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:55.884 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:55.884 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:55.884 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:55.884 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:55.884 06:14:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.884 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.884 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.884 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:55.884 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:55.884 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:23:55.884 06:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:02.456 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:02.456 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:02.456 Found net devices under 0000:af:00.0: cvl_0_0 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:02.456 Found net devices under 0000:af:00.1: cvl_0_1 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:02.456 06:14:21 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:02.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:02.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:24:02.456 00:24:02.456 --- 10.0.0.2 ping statistics --- 00:24:02.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.456 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:02.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:02.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:24:02.456 00:24:02.456 --- 10.0.0.1 ping statistics --- 00:24:02.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.456 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:02.456 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:02.457 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:02.457 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:02.457 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:02.457 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:02.457 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:02.457 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:02.457 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:02.457 06:14:21 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:02.457 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:02.457 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:02.457 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:02.457 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1029225 00:24:02.457 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:02.457 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1029225 00:24:02.457 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1029225 ']' 00:24:02.457 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.457 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:02.457 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:02.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:02.457 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:02.457 06:14:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:02.457 [2024-12-15 06:14:21.872980] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:24:02.457 [2024-12-15 06:14:21.873038] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:02.457 [2024-12-15 06:14:21.949293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.457 [2024-12-15 06:14:21.969870] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:02.457 [2024-12-15 06:14:21.969906] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:02.457 [2024-12-15 06:14:21.969916] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:02.457 [2024-12-15 06:14:21.969923] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:02.457 [2024-12-15 06:14:21.969927] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:02.457 [2024-12-15 06:14:21.970400] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.457 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:02.457 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:02.457 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:02.457 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:02.457 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:02.457 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:02.457 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:02.457 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:02.457 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:02.457 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Ihp 00:24:02.457 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:02.457 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Ihp 00:24:02.457 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Ihp 00:24:02.457 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Ihp 00:24:02.457 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:02.457 [2024-12-15 06:14:22.276875] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.457 [2024-12-15 06:14:22.292883] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:02.457 [2024-12-15 06:14:22.293088] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.457 malloc0 00:24:02.457 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:02.457 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1029248 00:24:02.457 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:02.457 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1029248 /var/tmp/bdevperf.sock 00:24:02.457 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1029248 ']' 00:24:02.457 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:02.457 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:02.457 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:02.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:02.457 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:02.457 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:02.457 [2024-12-15 06:14:22.419429] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:24:02.457 [2024-12-15 06:14:22.419474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1029248 ] 00:24:02.457 [2024-12-15 06:14:22.492717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.457 [2024-12-15 06:14:22.514916] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:24:02.716 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:02.716 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:02.716 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Ihp 00:24:02.716 06:14:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:02.975 [2024-12-15 06:14:22.991039] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:02.975 TLSTESTn1 00:24:02.975 06:14:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:03.234 Running I/O for 10 seconds... 
00:24:05.106 5264.00 IOPS, 20.56 MiB/s [2024-12-15T05:14:26.621Z] 5380.50 IOPS, 21.02 MiB/s [2024-12-15T05:14:27.558Z] 5409.00 IOPS, 21.13 MiB/s [2024-12-15T05:14:28.494Z] 5438.00 IOPS, 21.24 MiB/s [2024-12-15T05:14:29.430Z] 5451.00 IOPS, 21.29 MiB/s [2024-12-15T05:14:30.365Z] 5459.50 IOPS, 21.33 MiB/s [2024-12-15T05:14:31.302Z] 5460.57 IOPS, 21.33 MiB/s [2024-12-15T05:14:32.239Z] 5474.25 IOPS, 21.38 MiB/s [2024-12-15T05:14:33.617Z] 5483.78 IOPS, 21.42 MiB/s [2024-12-15T05:14:33.617Z] 5459.80 IOPS, 21.33 MiB/s 00:24:13.477 Latency(us) 00:24:13.477 [2024-12-15T05:14:33.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.477 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:13.477 Verification LBA range: start 0x0 length 0x2000 00:24:13.477 TLSTESTn1 : 10.02 5463.20 21.34 0.00 0.00 23394.08 6428.77 53677.10 00:24:13.477 [2024-12-15T05:14:33.617Z] =================================================================================================================== 00:24:13.477 [2024-12-15T05:14:33.617Z] Total : 5463.20 21.34 0.00 0.00 23394.08 6428.77 53677.10 00:24:13.477 { 00:24:13.477 "results": [ 00:24:13.477 { 00:24:13.477 "job": "TLSTESTn1", 00:24:13.477 "core_mask": "0x4", 00:24:13.477 "workload": "verify", 00:24:13.477 "status": "finished", 00:24:13.477 "verify_range": { 00:24:13.477 "start": 0, 00:24:13.477 "length": 8192 00:24:13.477 }, 00:24:13.477 "queue_depth": 128, 00:24:13.477 "io_size": 4096, 00:24:13.477 "runtime": 10.016849, 00:24:13.477 "iops": 5463.19506263896, 00:24:13.477 "mibps": 21.340605713433437, 00:24:13.477 "io_failed": 0, 00:24:13.477 "io_timeout": 0, 00:24:13.477 "avg_latency_us": 23394.083853241023, 00:24:13.477 "min_latency_us": 6428.769523809524, 00:24:13.477 "max_latency_us": 53677.10476190476 00:24:13.477 } 00:24:13.477 ], 00:24:13.477 "core_count": 1 00:24:13.477 } 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:13.477 
06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:13.477 nvmf_trace.0 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1029248 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1029248 ']' 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1029248 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1029248 00:24:13.477 06:14:33 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1029248' 00:24:13.477 killing process with pid 1029248 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1029248 00:24:13.477 Received shutdown signal, test time was about 10.000000 seconds 00:24:13.477 00:24:13.477 Latency(us) 00:24:13.477 [2024-12-15T05:14:33.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.477 [2024-12-15T05:14:33.617Z] =================================================================================================================== 00:24:13.477 [2024-12-15T05:14:33.617Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1029248 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:13.477 rmmod nvme_tcp 00:24:13.477 rmmod nvme_fabrics 00:24:13.477 rmmod nvme_keyring 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1029225 ']' 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1029225 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1029225 ']' 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1029225 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:13.477 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1029225 00:24:13.737 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:13.737 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:13.737 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1029225' 00:24:13.737 killing process with pid 1029225 00:24:13.737 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1029225 00:24:13.737 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1029225 00:24:13.737 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:13.737 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:13.737 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:13.737 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:24:13.737 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:13.737 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:13.737 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:13.737 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:13.737 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:13.737 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.737 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:13.737 06:14:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.276 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:16.276 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Ihp 00:24:16.276 00:24:16.276 real 0m20.316s 00:24:16.276 user 0m21.121s 00:24:16.276 sys 0m9.654s 00:24:16.276 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:16.276 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:16.276 ************************************ 00:24:16.276 END TEST nvmf_fips 00:24:16.276 ************************************ 00:24:16.276 06:14:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:16.276 06:14:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:16.276 06:14:35 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:24:16.276 06:14:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:16.276 ************************************ 00:24:16.276 START TEST nvmf_control_msg_list 00:24:16.276 ************************************ 00:24:16.276 06:14:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:16.276 * Looking for test storage... 00:24:16.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:16.276 06:14:36 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:16.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.276 --rc genhtml_branch_coverage=1 00:24:16.276 --rc genhtml_function_coverage=1 00:24:16.276 --rc genhtml_legend=1 00:24:16.276 --rc geninfo_all_blocks=1 00:24:16.276 --rc geninfo_unexecuted_blocks=1 00:24:16.276 00:24:16.276 ' 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:16.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.276 --rc genhtml_branch_coverage=1 00:24:16.276 --rc genhtml_function_coverage=1 00:24:16.276 --rc genhtml_legend=1 00:24:16.276 --rc geninfo_all_blocks=1 00:24:16.276 --rc geninfo_unexecuted_blocks=1 00:24:16.276 00:24:16.276 ' 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:16.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.276 --rc genhtml_branch_coverage=1 00:24:16.276 --rc genhtml_function_coverage=1 00:24:16.276 --rc genhtml_legend=1 00:24:16.276 --rc geninfo_all_blocks=1 00:24:16.276 --rc geninfo_unexecuted_blocks=1 00:24:16.276 00:24:16.276 ' 00:24:16.276 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # 
LCOV='lcov 00:24:16.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.276 --rc genhtml_branch_coverage=1 00:24:16.277 --rc genhtml_function_coverage=1 00:24:16.277 --rc genhtml_legend=1 00:24:16.277 --rc geninfo_all_blocks=1 00:24:16.277 --rc geninfo_unexecuted_blocks=1 00:24:16.277 00:24:16.277 ' 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.277 06:14:36 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:16.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:16.277 06:14:36 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:16.277 06:14:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:22.847 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:22.847 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:22.847 06:14:41 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:22.847 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:22.847 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:22.847 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:22.847 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:22.847 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:22.847 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:22.847 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:22.847 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:22.847 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:22.847 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:22.847 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:22.847 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:22.847 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:22.847 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:22.847 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:22.847 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:22.847 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:22.847 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:22.847 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:22.847 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:22.847 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:22.847 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:22.847 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:22.847 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:22.847 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:22.847 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:22.848 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:22.848 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:22.848 06:14:41 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:22.848 Found net devices under 0000:af:00.0: cvl_0_0 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:22.848 06:14:41 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:22.848 Found net devices under 0000:af:00.1: cvl_0_1 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:22.848 06:14:41 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:22.848 06:14:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:22.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:22.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:24:22.848 00:24:22.848 --- 10.0.0.2 ping statistics --- 00:24:22.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.848 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:24:22.848 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:22.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:22.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:24:22.848 00:24:22.848 --- 10.0.0.1 ping statistics --- 00:24:22.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.848 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:24:22.848 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:22.848 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:22.848 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:22.848 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:22.848 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:22.848 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:22.848 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:24:22.848 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:22.848 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:22.848 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:22.848 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:22.848 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:22.848 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:22.848 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1034506 00:24:22.848 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1034506 00:24:22.848 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:22.848 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1034506 ']' 00:24:22.848 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.848 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:22.848 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:22.848 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:22.848 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:22.849 [2024-12-15 06:14:42.108939] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:24:22.849 [2024-12-15 06:14:42.108982] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.849 [2024-12-15 06:14:42.186170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.849 [2024-12-15 06:14:42.206898] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.849 [2024-12-15 06:14:42.206934] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.849 [2024-12-15 06:14:42.206941] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.849 [2024-12-15 06:14:42.206948] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:22.849 [2024-12-15 06:14:42.206954] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:22.849 [2024-12-15 06:14:42.207439] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:22.849 [2024-12-15 06:14:42.350269] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:22.849 Malloc0 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:22.849 [2024-12-15 06:14:42.390552] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1034557 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1034559 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1034561 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:22.849 06:14:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1034557 00:24:22.849 [2024-12-15 06:14:42.469053] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:24:22.849 [2024-12-15 06:14:42.469227] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:22.849 [2024-12-15 06:14:42.479063] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:23.415 Initializing NVMe Controllers 00:24:23.415 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:23.415 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:23.415 Initialization complete. Launching workers. 00:24:23.415 ======================================================== 00:24:23.415 Latency(us) 00:24:23.415 Device Information : IOPS MiB/s Average min max 00:24:23.415 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 6297.97 24.60 158.44 126.66 40270.25 00:24:23.415 ======================================================== 00:24:23.415 Total : 6297.97 24.60 158.44 126.66 40270.25 00:24:23.415 00:24:23.673 Initializing NVMe Controllers 00:24:23.673 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:23.673 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:23.673 Initialization complete. Launching workers. 
00:24:23.673 ======================================================== 00:24:23.673 Latency(us) 00:24:23.673 Device Information : IOPS MiB/s Average min max 00:24:23.673 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 41026.93 40812.37 41886.40 00:24:23.673 ======================================================== 00:24:23.673 Total : 25.00 0.10 41026.93 40812.37 41886.40 00:24:23.673 00:24:23.673 Initializing NVMe Controllers 00:24:23.673 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:23.673 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:23.673 Initialization complete. Launching workers. 00:24:23.673 ======================================================== 00:24:23.673 Latency(us) 00:24:23.673 Device Information : IOPS MiB/s Average min max 00:24:23.673 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 6680.99 26.10 149.33 124.72 324.93 00:24:23.673 ======================================================== 00:24:23.673 Total : 6680.99 26.10 149.33 124.72 324.93 00:24:23.673 00:24:23.673 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1034559 00:24:23.673 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1034561 00:24:23.673 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:23.673 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:23.673 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:23.673 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:24:23.673 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:23.673 06:14:43 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:23.673 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:23.673 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:23.673 rmmod nvme_tcp 00:24:23.673 rmmod nvme_fabrics 00:24:23.673 rmmod nvme_keyring 00:24:23.673 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:23.673 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:23.673 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:23.673 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 1034506 ']' 00:24:23.674 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1034506 00:24:23.674 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1034506 ']' 00:24:23.674 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1034506 00:24:23.674 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:24:23.674 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:23.674 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1034506 00:24:23.674 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:23.674 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:23.674 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 1034506' 00:24:23.674 killing process with pid 1034506 00:24:23.674 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1034506 00:24:23.674 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1034506 00:24:23.932 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:23.932 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:23.932 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:23.932 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:23.932 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:23.932 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:23.932 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:23.932 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:23.932 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:23.932 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.932 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.932 06:14:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.465 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:26.465 00:24:26.465 real 0m10.039s 00:24:26.465 user 0m6.468s 
00:24:26.465 sys 0m5.469s 00:24:26.465 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:26.465 06:14:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:26.465 ************************************ 00:24:26.465 END TEST nvmf_control_msg_list 00:24:26.465 ************************************ 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:26.465 ************************************ 00:24:26.465 START TEST nvmf_wait_for_buf 00:24:26.465 ************************************ 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:26.465 * Looking for test storage... 
00:24:26.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:24:26.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.465 --rc genhtml_branch_coverage=1 00:24:26.465 --rc genhtml_function_coverage=1 00:24:26.465 --rc genhtml_legend=1 00:24:26.465 --rc geninfo_all_blocks=1 00:24:26.465 --rc geninfo_unexecuted_blocks=1 00:24:26.465 00:24:26.465 ' 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:26.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.465 --rc genhtml_branch_coverage=1 00:24:26.465 --rc genhtml_function_coverage=1 00:24:26.465 --rc genhtml_legend=1 00:24:26.465 --rc geninfo_all_blocks=1 00:24:26.465 --rc geninfo_unexecuted_blocks=1 00:24:26.465 00:24:26.465 ' 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:26.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.465 --rc genhtml_branch_coverage=1 00:24:26.465 --rc genhtml_function_coverage=1 00:24:26.465 --rc genhtml_legend=1 00:24:26.465 --rc geninfo_all_blocks=1 00:24:26.465 --rc geninfo_unexecuted_blocks=1 00:24:26.465 00:24:26.465 ' 00:24:26.465 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:26.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.465 --rc genhtml_branch_coverage=1 00:24:26.466 --rc genhtml_function_coverage=1 00:24:26.466 --rc genhtml_legend=1 00:24:26.466 --rc geninfo_all_blocks=1 00:24:26.466 --rc geninfo_unexecuted_blocks=1 00:24:26.466 00:24:26.466 ' 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:26.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:26.466 06:14:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:31.736 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:31.736 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:31.736 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:31.736 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:31.736 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:31.736 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:31.736 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:31.736 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:31.736 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:31.736 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:31.736 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:31.736 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:31.736 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:31.736 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:31.736 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:31.736 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:31.736 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:31.736 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:31.736 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:31.736 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:31.736 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:31.995 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:31.995 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:31.995 Found net devices under 0000:af:00.0: cvl_0_0 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:31.995 06:14:51 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:31.995 Found net devices under 0000:af:00.1: cvl_0_1 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:31.995 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:31.996 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:31.996 06:14:51 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:31.996 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:31.996 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:31.996 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:31.996 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:31.996 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:31.996 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:31.996 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:31.996 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:31.996 06:14:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:31.996 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:31.996 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:31.996 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:31.996 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:31.996 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:31.996 06:14:52 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:31.996 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:31.996 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:31.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:31.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:24:31.996 00:24:31.996 --- 10.0.0.2 ping statistics --- 00:24:31.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.996 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:24:31.996 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:31.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:31.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:24:31.996 00:24:31.996 --- 10.0.0.1 ping statistics --- 00:24:31.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.996 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:24:31.996 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:31.996 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:24:31.996 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:31.996 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:31.996 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:31.996 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:31.996 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:31.996 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:31.996 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:32.255 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:32.255 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:32.255 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:32.255 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.255 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1038215 00:24:32.255 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 1038215 00:24:32.255 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:32.255 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1038215 ']' 00:24:32.255 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.255 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:32.255 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.255 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:32.255 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.255 [2024-12-15 06:14:52.223955] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:24:32.255 [2024-12-15 06:14:52.224012] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.255 [2024-12-15 06:14:52.305287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.255 [2024-12-15 06:14:52.327082] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.255 [2024-12-15 06:14:52.327115] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:32.255 [2024-12-15 06:14:52.327122] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:32.255 [2024-12-15 06:14:52.327128] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:32.255 [2024-12-15 06:14:52.327133] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:32.255 [2024-12-15 06:14:52.327578] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.255 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:32.255 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:24:32.255 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:32.255 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:32.255 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.514 
06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.514 Malloc0 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:24:32.514 [2024-12-15 06:14:52.508917] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.514 [2024-12-15 06:14:52.537105] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:32.514 06:14:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:32.514 [2024-12-15 06:14:52.621073] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:34.417 Initializing NVMe Controllers 00:24:34.417 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:34.417 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:34.417 Initialization complete. Launching workers. 00:24:34.417 ======================================================== 00:24:34.417 Latency(us) 00:24:34.417 Device Information : IOPS MiB/s Average min max 00:24:34.417 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32239.83 7279.31 63864.63 00:24:34.417 ======================================================== 00:24:34.417 Total : 129.00 16.12 32239.83 7279.31 63864.63 00:24:34.417 00:24:34.417 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:34.417 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:34.417 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.417 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:34.417 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.417 06:14:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:24:34.417 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:24:34.417 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:34.418 rmmod nvme_tcp 00:24:34.418 rmmod nvme_fabrics 00:24:34.418 rmmod nvme_keyring 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1038215 ']' 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1038215 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1038215 ']' 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1038215 
00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1038215 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1038215' 00:24:34.418 killing process with pid 1038215 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1038215 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1038215 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:34.418 06:14:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:34.418 06:14:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.951 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:36.951 00:24:36.951 real 0m10.421s 00:24:36.951 user 0m4.020s 00:24:36.951 sys 0m4.850s 00:24:36.951 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:36.951 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:36.951 ************************************ 00:24:36.951 END TEST nvmf_wait_for_buf 00:24:36.951 ************************************ 00:24:36.951 06:14:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:36.951 06:14:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:36.951 06:14:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:36.951 06:14:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:36.951 06:14:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:36.951 ************************************ 00:24:36.951 START TEST nvmf_fuzz 00:24:36.951 ************************************ 00:24:36.951 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh 
--transport=tcp 00:24:36.951 * Looking for test storage... 00:24:36.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:36.951 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:36.951 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:24:36.951 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:36.951 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:36.951 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:36.951 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:36.952 06:14:56 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:36.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.952 --rc genhtml_branch_coverage=1 00:24:36.952 --rc genhtml_function_coverage=1 
00:24:36.952 --rc genhtml_legend=1 00:24:36.952 --rc geninfo_all_blocks=1 00:24:36.952 --rc geninfo_unexecuted_blocks=1 00:24:36.952 00:24:36.952 ' 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:36.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.952 --rc genhtml_branch_coverage=1 00:24:36.952 --rc genhtml_function_coverage=1 00:24:36.952 --rc genhtml_legend=1 00:24:36.952 --rc geninfo_all_blocks=1 00:24:36.952 --rc geninfo_unexecuted_blocks=1 00:24:36.952 00:24:36.952 ' 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:36.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.952 --rc genhtml_branch_coverage=1 00:24:36.952 --rc genhtml_function_coverage=1 00:24:36.952 --rc genhtml_legend=1 00:24:36.952 --rc geninfo_all_blocks=1 00:24:36.952 --rc geninfo_unexecuted_blocks=1 00:24:36.952 00:24:36.952 ' 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:36.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.952 --rc genhtml_branch_coverage=1 00:24:36.952 --rc genhtml_function_coverage=1 00:24:36.952 --rc genhtml_legend=1 00:24:36.952 --rc geninfo_all_blocks=1 00:24:36.952 --rc geninfo_unexecuted_blocks=1 00:24:36.952 00:24:36.952 ' 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.952 
06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:36.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:36.952 06:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:43.518 06:15:02 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.0 (0x8086 - 0x159b)' 00:24:43.518 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:43.518 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:43.518 Found net devices under 0000:af:00.0: cvl_0_0 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:43.518 Found net devices under 0000:af:00.1: cvl_0_1 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:43.518 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:43.519 06:15:02 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:43.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:43.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:24:43.519 00:24:43.519 --- 10.0.0.2 ping statistics --- 00:24:43.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.519 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:43.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:43.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:24:43.519 00:24:43.519 --- 10.0.0.1 ping statistics --- 00:24:43.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.519 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1042257 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1042257 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' 
-z 1042257 ']' 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:43.519 Malloc0 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.519 06:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:43.519 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.519 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:43.519 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.519 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:43.519 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.519 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:43.519 06:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:15.593 Fuzzing completed. 
Shutting down the fuzz application 00:25:15.593 00:25:15.593 Dumping successful admin opcodes: 00:25:15.593 9, 10, 00:25:15.593 Dumping successful io opcodes: 00:25:15.593 0, 9, 00:25:15.593 NS: 0x2000008eff00 I/O qp, Total commands completed: 875654, total successful commands: 5091, random_seed: 1278437312 00:25:15.593 NS: 0x2000008eff00 admin qp, Total commands completed: 81664, total successful commands: 19, random_seed: 1426171328 00:25:15.593 06:15:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:15.593 Fuzzing completed. Shutting down the fuzz application 00:25:15.593 00:25:15.593 Dumping successful admin opcodes: 00:25:15.593 00:25:15.593 Dumping successful io opcodes: 00:25:15.593 00:25:15.593 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3141325030 00:25:15.593 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 0, random_seed: 3141389022 00:25:15.593 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:15.593 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.593 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:15.593 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.593 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:15.593 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:15.593 06:15:34 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:15.593 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:15.593 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:15.593 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:15.593 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:15.593 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:15.593 rmmod nvme_tcp 00:25:15.593 rmmod nvme_fabrics 00:25:15.593 rmmod nvme_keyring 00:25:15.593 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:15.593 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:15.593 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:15.593 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 1042257 ']' 00:25:15.593 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 1042257 00:25:15.593 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1042257 ']' 00:25:15.593 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 1042257 00:25:15.593 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:15.593 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:15.593 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1042257 00:25:15.593 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:15.594 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:25:15.594 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1042257' 00:25:15.594 killing process with pid 1042257 00:25:15.594 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 1042257 00:25:15.594 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 1042257 00:25:15.594 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:15.594 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:15.594 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:15.594 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:15.594 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:15.594 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:15.594 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:15.594 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:15.594 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:15.594 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.594 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:15.594 06:15:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.970 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:16.970 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:16.970 00:25:16.970 real 0m40.524s 00:25:16.970 user 0m51.738s 00:25:16.970 sys 0m17.876s 00:25:16.970 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:16.970 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:16.970 ************************************ 00:25:16.970 END TEST nvmf_fuzz 00:25:16.970 ************************************ 00:25:17.229 06:15:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:17.229 06:15:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:17.229 06:15:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:17.229 06:15:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:17.229 ************************************ 00:25:17.229 START TEST nvmf_multiconnection 00:25:17.229 ************************************ 00:25:17.229 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:17.229 * Looking for test storage... 
00:25:17.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:17.229 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:17.229 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:25:17.229 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:17.229 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:17.229 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:17.229 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:17.229 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:17.229 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:17.229 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:17.229 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:17.229 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:17.229 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:17.229 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:17.229 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:17.229 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:17.229 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:17.229 06:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:17.229 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:17.229 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:17.229 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:17.229 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:17.229 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:17.229 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:17.229 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:17.229 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:17.229 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:17.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.230 --rc genhtml_branch_coverage=1 00:25:17.230 --rc genhtml_function_coverage=1 00:25:17.230 --rc genhtml_legend=1 00:25:17.230 --rc geninfo_all_blocks=1 00:25:17.230 --rc geninfo_unexecuted_blocks=1 00:25:17.230 00:25:17.230 ' 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:17.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.230 --rc genhtml_branch_coverage=1 00:25:17.230 --rc genhtml_function_coverage=1 00:25:17.230 --rc genhtml_legend=1 00:25:17.230 --rc geninfo_all_blocks=1 00:25:17.230 --rc geninfo_unexecuted_blocks=1 00:25:17.230 00:25:17.230 ' 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:17.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.230 --rc genhtml_branch_coverage=1 00:25:17.230 --rc genhtml_function_coverage=1 00:25:17.230 --rc genhtml_legend=1 00:25:17.230 --rc geninfo_all_blocks=1 00:25:17.230 --rc geninfo_unexecuted_blocks=1 00:25:17.230 00:25:17.230 ' 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:17.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.230 --rc genhtml_branch_coverage=1 00:25:17.230 --rc genhtml_function_coverage=1 00:25:17.230 --rc genhtml_legend=1 00:25:17.230 --rc geninfo_all_blocks=1 00:25:17.230 --rc geninfo_unexecuted_blocks=1 00:25:17.230 00:25:17.230 ' 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@7 -- # uname -s 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:17.230 06:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:17.230 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:17.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:17.489 06:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:22.852 06:15:42 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:22.852 06:15:42 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:22.852 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:22.852 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:22.852 Found net devices under 0000:af:00.0: cvl_0_0 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == 
up ]] 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:22.852 Found net devices under 0000:af:00.1: cvl_0_1 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:22.852 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:22.853 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:22.853 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:22.853 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:22.853 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:22.853 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:22.853 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:22.853 06:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:23.111 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:23.111 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:23.111 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:23.111 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:23.371 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:23.371 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:23.371 06:15:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:23.371 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:23.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:23.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:25:23.371 00:25:23.371 --- 10.0.0.2 ping statistics --- 00:25:23.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.371 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:25:23.371 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:23.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:23.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:25:23.371 00:25:23.371 --- 10.0.0.1 ping statistics --- 00:25:23.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.371 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:25:23.371 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:23.371 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:23.371 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:23.371 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:23.371 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:23.371 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:23.371 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:25:23.371 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:23.371 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:23.371 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:23.371 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:23.371 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:23.371 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.371 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=1051218 00:25:23.371 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:23.371 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 1051218 00:25:23.371 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 1051218 ']' 00:25:23.371 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.371 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:23.371 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:23.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:23.371 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:23.371 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.371 [2024-12-15 06:15:43.415679] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:25:23.371 [2024-12-15 06:15:43.415726] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:23.371 [2024-12-15 06:15:43.479179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:23.371 [2024-12-15 06:15:43.503272] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:23.371 [2024-12-15 06:15:43.503307] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:23.371 [2024-12-15 06:15:43.503314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:23.371 [2024-12-15 06:15:43.503321] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:23.371 [2024-12-15 06:15:43.503326] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:23.371 [2024-12-15 06:15:43.504613] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.371 [2024-12-15 06:15:43.504661] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:25:23.371 [2024-12-15 06:15:43.504772] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.371 [2024-12-15 06:15:43.504772] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:25:23.630 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:23.630 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:25:23.630 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:23.630 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:23.630 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.630 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:23.630 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:23.630 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.630 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.630 [2024-12-15 06:15:43.644321] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:23.630 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.630 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:23.630 06:15:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.630 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:23.630 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.630 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.630 Malloc1 00:25:23.630 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.630 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:23.630 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.630 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.630 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.630 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:23.630 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.630 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.630 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.630 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:23.630 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.630 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.630 [2024-12-15 06:15:43.708435] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:23.630 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.630 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.630 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:23.630 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.631 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.631 Malloc2 00:25:23.631 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.631 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:23.631 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.631 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.631 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.631 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:23.631 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.631 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:23.631 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.631 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:23.631 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.631 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.631 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.631 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.631 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:23.631 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.631 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.890 Malloc3 00:25:23.890 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.890 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:23.890 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.891 Malloc4 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.891 
06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.891 Malloc5 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.891 06:15:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.891 Malloc6 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.891 Malloc7 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.891 06:15:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.891 Malloc8 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.891 06:15:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.891 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:23.891 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.892 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.892 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.892 06:15:44 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:23.892 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.892 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.892 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.892 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.892 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:23.892 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.892 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:24.151 Malloc9 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:24.151 Malloc10 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:24.151 Malloc11 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:24.151 
06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:24.151 06:15:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:25:25.528 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:25.528 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:25.528 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:25.528 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:25.528 06:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:27.433 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:27.433 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:27.433 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:25:27.433 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:27.433 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:27.433 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:27.433 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:27.433 06:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:28.370 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:28.370 06:15:48 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:28.370 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:28.370 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:28.370 06:15:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:30.904 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:30.904 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:30.904 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:25:30.904 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:30.904 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:30.904 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:30.904 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:30.904 06:15:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:31.840 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:31.840 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:31.840 06:15:51 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:31.840 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:31.840 06:15:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:33.746 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:33.746 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:33.746 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:25:33.746 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:33.746 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:33.746 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:33.746 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.746 06:15:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:35.123 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:35.123 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:35.123 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:35.123 
06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:35.123 06:15:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:37.023 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:37.024 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:37.024 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:25:37.024 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:37.024 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:37.024 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:37.024 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:37.024 06:15:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:38.399 06:15:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:38.399 06:15:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:38.399 06:15:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:38.399 06:15:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:38.399 06:15:58 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:40.304 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:40.304 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:40.304 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:25:40.304 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:40.304 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:40.304 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:40.304 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:40.304 06:16:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:41.679 06:16:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:41.679 06:16:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:41.679 06:16:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:41.679 06:16:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:41.679 06:16:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:43.581 06:16:03 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:43.581 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:43.581 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:25:43.581 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:43.581 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:43.581 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:43.581 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.581 06:16:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:44.956 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:44.956 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:44.956 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:44.956 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:44.956 06:16:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:46.856 06:16:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:46.856 06:16:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:46.856 06:16:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:25:46.856 06:16:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:46.856 06:16:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:46.856 06:16:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:46.856 06:16:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:46.856 06:16:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:48.231 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:48.231 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:48.231 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:48.489 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:48.489 06:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:50.391 06:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:50.391 06:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:50.391 06:16:10 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:25:50.391 06:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:50.391 06:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:50.391 06:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:50.391 06:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.391 06:16:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:51.765 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:51.765 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:51.765 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:51.765 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:51.765 06:16:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:54.357 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:54.357 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:54.357 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:25:54.357 06:16:13 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:54.357 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:54.357 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:54.357 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:54.357 06:16:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:55.291 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:55.291 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:55.291 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:55.291 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:55.291 06:16:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:57.819 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:57.819 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:57.819 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:25:57.819 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:57.819 06:16:17 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:57.819 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:57.819 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.819 06:16:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:58.754 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:58.754 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:58.754 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:58.754 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:58.754 06:16:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:00.654 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:00.654 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:00.654 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:26:00.654 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:00.654 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:00.654 
06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:00.654 06:16:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:00.654 [global] 00:26:00.654 thread=1 00:26:00.654 invalidate=1 00:26:00.654 rw=read 00:26:00.654 time_based=1 00:26:00.654 runtime=10 00:26:00.654 ioengine=libaio 00:26:00.654 direct=1 00:26:00.654 bs=262144 00:26:00.654 iodepth=64 00:26:00.654 norandommap=1 00:26:00.654 numjobs=1 00:26:00.654 00:26:00.654 [job0] 00:26:00.654 filename=/dev/nvme0n1 00:26:00.654 [job1] 00:26:00.654 filename=/dev/nvme10n1 00:26:00.654 [job2] 00:26:00.654 filename=/dev/nvme1n1 00:26:00.654 [job3] 00:26:00.654 filename=/dev/nvme2n1 00:26:00.654 [job4] 00:26:00.654 filename=/dev/nvme3n1 00:26:00.654 [job5] 00:26:00.654 filename=/dev/nvme4n1 00:26:00.654 [job6] 00:26:00.654 filename=/dev/nvme5n1 00:26:00.654 [job7] 00:26:00.654 filename=/dev/nvme6n1 00:26:00.654 [job8] 00:26:00.654 filename=/dev/nvme7n1 00:26:00.654 [job9] 00:26:00.654 filename=/dev/nvme8n1 00:26:00.654 [job10] 00:26:00.654 filename=/dev/nvme9n1 00:26:00.936 Could not set queue depth (nvme0n1) 00:26:00.936 Could not set queue depth (nvme10n1) 00:26:00.936 Could not set queue depth (nvme1n1) 00:26:00.936 Could not set queue depth (nvme2n1) 00:26:00.936 Could not set queue depth (nvme3n1) 00:26:00.936 Could not set queue depth (nvme4n1) 00:26:00.936 Could not set queue depth (nvme5n1) 00:26:00.936 Could not set queue depth (nvme6n1) 00:26:00.936 Could not set queue depth (nvme7n1) 00:26:00.936 Could not set queue depth (nvme8n1) 00:26:00.936 Could not set queue depth (nvme9n1) 00:26:01.201 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.201 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:26:01.201 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.201 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.201 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.201 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.201 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.201 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.201 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.201 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.201 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.201 fio-3.35 00:26:01.201 Starting 11 threads 00:26:13.407 00:26:13.407 job0: (groupid=0, jobs=1): err= 0: pid=1057538: Sun Dec 15 06:16:31 2024 00:26:13.407 read: IOPS=338, BW=84.7MiB/s (88.8MB/s)(858MiB/10132msec) 00:26:13.407 slat (usec): min=9, max=690316, avg=2082.14, stdev=22456.26 00:26:13.407 clat (usec): min=765, max=1501.3k, avg=186730.37, stdev=248876.24 00:26:13.407 lat (usec): min=816, max=1534.8k, avg=188812.51, stdev=251124.28 00:26:13.407 clat percentiles (msec): 00:26:13.407 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 5], 20.00th=[ 12], 00:26:13.407 | 30.00th=[ 26], 40.00th=[ 53], 50.00th=[ 78], 60.00th=[ 140], 00:26:13.407 | 70.00th=[ 209], 80.00th=[ 313], 90.00th=[ 472], 95.00th=[ 785], 00:26:13.407 | 99.00th=[ 1183], 99.50th=[ 1250], 99.90th=[ 1502], 99.95th=[ 1502], 00:26:13.407 | 99.99th=[ 1502] 00:26:13.407 bw ( KiB/s): 
min=10240, max=411648, per=10.22%, avg=90758.74, stdev=91454.38, samples=19 00:26:13.407 iops : min= 40, max= 1608, avg=354.53, stdev=357.24, samples=19 00:26:13.407 lat (usec) : 1000=0.17% 00:26:13.407 lat (msec) : 2=0.58%, 4=9.01%, 10=8.95%, 20=4.58%, 50=16.06% 00:26:13.407 lat (msec) : 100=14.86%, 250=20.02%, 500=16.29%, 750=3.41%, 1000=4.69% 00:26:13.407 lat (msec) : 2000=1.37% 00:26:13.407 cpu : usr=0.11%, sys=1.13%, ctx=1287, majf=0, minf=3722 00:26:13.407 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:13.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.407 issued rwts: total=3431,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.407 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.407 job1: (groupid=0, jobs=1): err= 0: pid=1057539: Sun Dec 15 06:16:31 2024 00:26:13.407 read: IOPS=281, BW=70.4MiB/s (73.8MB/s)(713MiB/10130msec) 00:26:13.407 slat (usec): min=15, max=547722, avg=1974.15, stdev=17886.00 00:26:13.407 clat (usec): min=1468, max=1357.9k, avg=225094.95, stdev=247640.17 00:26:13.407 lat (usec): min=1518, max=1357.9k, avg=227069.10, stdev=250189.65 00:26:13.407 clat percentiles (msec): 00:26:13.407 | 1.00th=[ 5], 5.00th=[ 10], 10.00th=[ 20], 20.00th=[ 31], 00:26:13.407 | 30.00th=[ 61], 40.00th=[ 103], 50.00th=[ 144], 60.00th=[ 199], 00:26:13.407 | 70.00th=[ 245], 80.00th=[ 347], 90.00th=[ 625], 95.00th=[ 718], 00:26:13.407 | 99.00th=[ 1284], 99.50th=[ 1301], 99.90th=[ 1351], 99.95th=[ 1351], 00:26:13.407 | 99.99th=[ 1351] 00:26:13.407 bw ( KiB/s): min= 2560, max=201728, per=8.04%, avg=71372.80, stdev=61861.05, samples=20 00:26:13.407 iops : min= 10, max= 788, avg=278.80, stdev=241.64, samples=20 00:26:13.407 lat (msec) : 2=0.14%, 4=0.53%, 10=4.73%, 20=5.26%, 50=14.45% 00:26:13.407 lat (msec) : 100=14.76%, 250=31.42%, 500=13.85%, 750=11.12%, 1000=2.28% 00:26:13.407 lat (msec) : 
2000=1.47% 00:26:13.407 cpu : usr=0.14%, sys=1.04%, ctx=1069, majf=0, minf=4097 00:26:13.407 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:13.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.407 issued rwts: total=2852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.407 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.407 job2: (groupid=0, jobs=1): err= 0: pid=1057544: Sun Dec 15 06:16:31 2024 00:26:13.407 read: IOPS=517, BW=129MiB/s (136MB/s)(1295MiB/10014msec) 00:26:13.407 slat (usec): min=8, max=279423, avg=1257.80, stdev=10166.76 00:26:13.407 clat (usec): min=697, max=979591, avg=122323.74, stdev=188140.80 00:26:13.407 lat (usec): min=740, max=979621, avg=123581.54, stdev=189896.15 00:26:13.407 clat percentiles (msec): 00:26:13.407 | 1.00th=[ 4], 5.00th=[ 20], 10.00th=[ 25], 20.00th=[ 29], 00:26:13.407 | 30.00th=[ 30], 40.00th=[ 32], 50.00th=[ 34], 60.00th=[ 48], 00:26:13.407 | 70.00th=[ 80], 80.00th=[ 146], 90.00th=[ 456], 95.00th=[ 600], 00:26:13.407 | 99.00th=[ 785], 99.50th=[ 860], 99.90th=[ 978], 99.95th=[ 978], 00:26:13.407 | 99.99th=[ 978] 00:26:13.407 bw ( KiB/s): min=14848, max=531968, per=14.75%, avg=131020.80, stdev=158662.87, samples=20 00:26:13.407 iops : min= 58, max= 2078, avg=511.80, stdev=619.78, samples=20 00:26:13.407 lat (usec) : 750=0.02%, 1000=0.04% 00:26:13.407 lat (msec) : 2=0.46%, 4=0.50%, 10=1.78%, 20=3.24%, 50=55.99% 00:26:13.407 lat (msec) : 100=11.64%, 250=12.41%, 500=5.08%, 750=7.20%, 1000=1.64% 00:26:13.407 cpu : usr=0.12%, sys=1.92%, ctx=1161, majf=0, minf=4097 00:26:13.407 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:13.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.407 issued rwts: total=5181,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:26:13.408 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.408 job3: (groupid=0, jobs=1): err= 0: pid=1057547: Sun Dec 15 06:16:31 2024 00:26:13.408 read: IOPS=492, BW=123MiB/s (129MB/s)(1244MiB/10103msec) 00:26:13.408 slat (usec): min=9, max=222707, avg=1727.49, stdev=11836.15 00:26:13.408 clat (msec): min=15, max=915, avg=128.11, stdev=192.57 00:26:13.408 lat (msec): min=15, max=1002, avg=129.84, stdev=195.06 00:26:13.408 clat percentiles (msec): 00:26:13.408 | 1.00th=[ 22], 5.00th=[ 25], 10.00th=[ 26], 20.00th=[ 29], 00:26:13.408 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 33], 60.00th=[ 42], 00:26:13.408 | 70.00th=[ 75], 80.00th=[ 167], 90.00th=[ 464], 95.00th=[ 592], 00:26:13.408 | 99.00th=[ 785], 99.50th=[ 860], 99.90th=[ 894], 99.95th=[ 894], 00:26:13.408 | 99.99th=[ 919] 00:26:13.408 bw ( KiB/s): min=18432, max=596992, per=14.16%, avg=125725.35, stdev=181303.44, samples=20 00:26:13.408 iops : min= 72, max= 2332, avg=491.10, stdev=708.22, samples=20 00:26:13.408 lat (msec) : 20=0.04%, 50=64.28%, 100=9.03%, 250=9.75%, 500=7.98% 00:26:13.408 lat (msec) : 750=6.81%, 1000=2.11% 00:26:13.408 cpu : usr=0.12%, sys=1.94%, ctx=704, majf=0, minf=4097 00:26:13.408 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:26:13.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.408 issued rwts: total=4975,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.408 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.408 job4: (groupid=0, jobs=1): err= 0: pid=1057551: Sun Dec 15 06:16:31 2024 00:26:13.408 read: IOPS=265, BW=66.3MiB/s (69.5MB/s)(671MiB/10124msec) 00:26:13.408 slat (usec): min=15, max=595751, avg=2453.28, stdev=18340.11 00:26:13.408 clat (usec): min=1280, max=1006.5k, avg=238691.73, stdev=212622.61 00:26:13.408 lat (usec): min=1341, max=1251.9k, avg=241145.00, 
stdev=214717.95 00:26:13.408 clat percentiles (msec): 00:26:13.408 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 27], 20.00th=[ 73], 00:26:13.408 | 30.00th=[ 92], 40.00th=[ 142], 50.00th=[ 182], 60.00th=[ 226], 00:26:13.408 | 70.00th=[ 279], 80.00th=[ 372], 90.00th=[ 550], 95.00th=[ 726], 00:26:13.408 | 99.00th=[ 902], 99.50th=[ 936], 99.90th=[ 969], 99.95th=[ 1003], 00:26:13.408 | 99.99th=[ 1003] 00:26:13.408 bw ( KiB/s): min= 7168, max=181248, per=7.55%, avg=67072.00, stdev=45691.82, samples=20 00:26:13.408 iops : min= 28, max= 708, avg=262.00, stdev=178.48, samples=20 00:26:13.408 lat (msec) : 2=0.07%, 4=0.41%, 10=5.03%, 20=3.28%, 50=4.77% 00:26:13.408 lat (msec) : 100=18.41%, 250=32.68%, 500=22.99%, 750=7.75%, 1000=4.55% 00:26:13.408 lat (msec) : 2000=0.07% 00:26:13.408 cpu : usr=0.14%, sys=0.86%, ctx=631, majf=0, minf=4097 00:26:13.408 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:13.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.408 issued rwts: total=2684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.408 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.408 job5: (groupid=0, jobs=1): err= 0: pid=1057575: Sun Dec 15 06:16:31 2024 00:26:13.408 read: IOPS=342, BW=85.6MiB/s (89.8MB/s)(865MiB/10102msec) 00:26:13.408 slat (usec): min=15, max=221074, avg=1561.92, stdev=9414.02 00:26:13.408 clat (usec): min=912, max=1184.8k, avg=185079.26, stdev=185089.39 00:26:13.408 lat (usec): min=942, max=1184.9k, avg=186641.18, stdev=186138.26 00:26:13.408 clat percentiles (msec): 00:26:13.408 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 20], 20.00th=[ 39], 00:26:13.408 | 30.00th=[ 73], 40.00th=[ 100], 50.00th=[ 142], 60.00th=[ 184], 00:26:13.408 | 70.00th=[ 228], 80.00th=[ 271], 90.00th=[ 368], 95.00th=[ 558], 00:26:13.408 | 99.00th=[ 1036], 99.50th=[ 1116], 99.90th=[ 1133], 99.95th=[ 1167], 00:26:13.408 | 99.99th=[ 
1183] 00:26:13.408 bw ( KiB/s): min=24576, max=175616, per=9.79%, avg=86937.60, stdev=48685.96, samples=20 00:26:13.408 iops : min= 96, max= 686, avg=339.60, stdev=190.18, samples=20 00:26:13.408 lat (usec) : 1000=0.12% 00:26:13.408 lat (msec) : 2=0.61%, 4=0.61%, 10=4.10%, 20=4.60%, 50=14.13% 00:26:13.408 lat (msec) : 100=16.07%, 250=33.76%, 500=18.58%, 750=5.78%, 1000=0.46% 00:26:13.408 lat (msec) : 2000=1.18% 00:26:13.408 cpu : usr=0.11%, sys=1.31%, ctx=1116, majf=0, minf=4097 00:26:13.408 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:13.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.408 issued rwts: total=3460,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.408 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.408 job6: (groupid=0, jobs=1): err= 0: pid=1057598: Sun Dec 15 06:16:31 2024 00:26:13.408 read: IOPS=282, BW=70.6MiB/s (74.1MB/s)(709MiB/10042msec) 00:26:13.408 slat (usec): min=15, max=229437, avg=2916.94, stdev=14995.25 00:26:13.408 clat (usec): min=1497, max=946111, avg=223434.08, stdev=231561.24 00:26:13.408 lat (usec): min=1549, max=946139, avg=226351.02, stdev=235138.52 00:26:13.408 clat percentiles (msec): 00:26:13.408 | 1.00th=[ 34], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 43], 00:26:13.408 | 30.00th=[ 46], 40.00th=[ 49], 50.00th=[ 89], 60.00th=[ 211], 00:26:13.408 | 70.00th=[ 313], 80.00th=[ 401], 90.00th=[ 617], 95.00th=[ 709], 00:26:13.408 | 99.00th=[ 827], 99.50th=[ 894], 99.90th=[ 944], 99.95th=[ 944], 00:26:13.408 | 99.99th=[ 944] 00:26:13.408 bw ( KiB/s): min= 9728, max=360448, per=8.00%, avg=71017.45, stdev=97854.94, samples=20 00:26:13.408 iops : min= 38, max= 1408, avg=277.40, stdev=382.25, samples=20 00:26:13.408 lat (msec) : 2=0.04%, 50=42.37%, 100=10.29%, 250=9.31%, 500=21.96% 00:26:13.408 lat (msec) : 750=12.69%, 1000=3.35% 00:26:13.408 cpu : usr=0.15%, sys=1.03%, 
ctx=488, majf=0, minf=4097 00:26:13.408 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:13.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.408 issued rwts: total=2837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.408 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.408 job7: (groupid=0, jobs=1): err= 0: pid=1057617: Sun Dec 15 06:16:31 2024 00:26:13.408 read: IOPS=161, BW=40.3MiB/s (42.3MB/s)(409MiB/10134msec) 00:26:13.408 slat (usec): min=11, max=280473, avg=3235.79, stdev=19650.74 00:26:13.408 clat (usec): min=1566, max=950947, avg=393279.76, stdev=243621.49 00:26:13.408 lat (msec): min=2, max=950, avg=396.52, stdev=246.09 00:26:13.408 clat percentiles (msec): 00:26:13.408 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 26], 20.00th=[ 117], 00:26:13.408 | 30.00th=[ 251], 40.00th=[ 347], 50.00th=[ 405], 60.00th=[ 489], 00:26:13.408 | 70.00th=[ 550], 80.00th=[ 625], 90.00th=[ 718], 95.00th=[ 768], 00:26:13.408 | 99.00th=[ 835], 99.50th=[ 860], 99.90th=[ 944], 99.95th=[ 953], 00:26:13.408 | 99.99th=[ 953] 00:26:13.408 bw ( KiB/s): min=17408, max=100864, per=4.53%, avg=40192.00, stdev=21069.41, samples=20 00:26:13.408 iops : min= 68, max= 394, avg=157.00, stdev=82.30, samples=20 00:26:13.408 lat (msec) : 2=0.06%, 4=0.24%, 10=4.77%, 20=2.39%, 50=7.59% 00:26:13.408 lat (msec) : 100=3.06%, 250=10.71%, 500=33.60%, 750=30.48%, 1000=7.10% 00:26:13.408 cpu : usr=0.05%, sys=0.63%, ctx=377, majf=0, minf=4097 00:26:13.408 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:26:13.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.408 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.408 issued rwts: total=1634,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.408 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.408 
job8: (groupid=0, jobs=1): err= 0: pid=1057675: Sun Dec 15 06:16:31 2024 00:26:13.408 read: IOPS=173, BW=43.5MiB/s (45.6MB/s)(441MiB/10134msec) 00:26:13.408 slat (usec): min=21, max=328018, avg=4049.00, stdev=22441.51 00:26:13.408 clat (usec): min=673, max=1065.0k, avg=363676.40, stdev=263801.00 00:26:13.408 lat (usec): min=702, max=1065.0k, avg=367725.41, stdev=266557.93 00:26:13.408 clat percentiles (usec): 00:26:13.408 | 1.00th=[ 1729], 5.00th=[ 7046], 10.00th=[ 9503], 00:26:13.408 | 20.00th=[ 68682], 30.00th=[ 154141], 40.00th=[ 252707], 00:26:13.408 | 50.00th=[ 325059], 60.00th=[ 488637], 70.00th=[ 566232], 00:26:13.408 | 80.00th=[ 633340], 90.00th=[ 700449], 95.00th=[ 767558], 00:26:13.408 | 99.00th=[ 868221], 99.50th=[ 893387], 99.90th=[ 926942], 00:26:13.408 | 99.95th=[1061159], 99.99th=[1061159] 00:26:13.408 bw ( KiB/s): min=12800, max=198144, per=4.90%, avg=43473.65, stdev=41395.86, samples=20 00:26:13.408 iops : min= 50, max= 774, avg=169.80, stdev=161.70, samples=20 00:26:13.408 lat (usec) : 750=0.17%, 1000=0.28% 00:26:13.408 lat (msec) : 2=0.68%, 4=0.40%, 10=9.93%, 20=5.22%, 50=1.19% 00:26:13.408 lat (msec) : 100=5.96%, 250=14.53%, 500=23.89%, 750=30.87%, 1000=6.81% 00:26:13.408 lat (msec) : 2000=0.06% 00:26:13.408 cpu : usr=0.07%, sys=0.64%, ctx=425, majf=0, minf=4097 00:26:13.408 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:26:13.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.408 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.408 issued rwts: total=1762,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.408 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.408 job9: (groupid=0, jobs=1): err= 0: pid=1057701: Sun Dec 15 06:16:31 2024 00:26:13.408 read: IOPS=374, BW=93.7MiB/s (98.2MB/s)(949MiB/10133msec) 00:26:13.408 slat (usec): min=7, max=230616, avg=1965.13, stdev=12832.79 00:26:13.408 clat (usec): min=570, max=1096.3k, avg=168691.53, 
stdev=223364.67 00:26:13.408 lat (usec): min=601, max=1278.7k, avg=170656.66, stdev=226146.39 00:26:13.408 clat percentiles (usec): 00:26:13.408 | 1.00th=[ 1401], 5.00th=[ 20317], 10.00th=[ 25822], 00:26:13.408 | 20.00th=[ 28705], 30.00th=[ 31065], 40.00th=[ 42206], 00:26:13.408 | 50.00th=[ 54789], 60.00th=[ 90702], 70.00th=[ 179307], 00:26:13.408 | 80.00th=[ 274727], 90.00th=[ 530580], 95.00th=[ 692061], 00:26:13.408 | 99.00th=[ 977273], 99.50th=[1061159], 99.90th=[1098908], 00:26:13.408 | 99.95th=[1098908], 99.99th=[1098908] 00:26:13.408 bw ( KiB/s): min=13824, max=422400, per=10.76%, avg=95539.20, stdev=120480.19, samples=20 00:26:13.408 iops : min= 54, max= 1650, avg=373.20, stdev=470.63, samples=20 00:26:13.408 lat (usec) : 750=0.11%, 1000=0.24% 00:26:13.408 lat (msec) : 2=1.00%, 4=0.55%, 10=1.40%, 20=1.63%, 50=41.86% 00:26:13.408 lat (msec) : 100=15.83%, 250=15.86%, 500=10.14%, 750=8.32%, 1000=2.27% 00:26:13.408 lat (msec) : 2000=0.79% 00:26:13.409 cpu : usr=0.16%, sys=1.23%, ctx=865, majf=0, minf=4097 00:26:13.409 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:26:13.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.409 issued rwts: total=3796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.409 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.409 job10: (groupid=0, jobs=1): err= 0: pid=1057721: Sun Dec 15 06:16:31 2024 00:26:13.409 read: IOPS=251, BW=62.9MiB/s (65.9MB/s)(635MiB/10097msec) 00:26:13.409 slat (usec): min=15, max=395684, avg=2986.71, stdev=15233.74 00:26:13.409 clat (msec): min=11, max=1050, avg=251.17, stdev=184.82 00:26:13.409 lat (msec): min=12, max=1050, avg=254.16, stdev=186.63 00:26:13.409 clat percentiles (msec): 00:26:13.409 | 1.00th=[ 24], 5.00th=[ 37], 10.00th=[ 73], 20.00th=[ 100], 00:26:13.409 | 30.00th=[ 133], 40.00th=[ 171], 50.00th=[ 201], 60.00th=[ 241], 00:26:13.409 
| 70.00th=[ 292], 80.00th=[ 368], 90.00th=[ 542], 95.00th=[ 651], 00:26:13.409 | 99.00th=[ 776], 99.50th=[ 919], 99.90th=[ 969], 99.95th=[ 969], 00:26:13.409 | 99.99th=[ 1053] 00:26:13.409 bw ( KiB/s): min=16896, max=150528, per=7.14%, avg=63385.60, stdev=37355.36, samples=20 00:26:13.409 iops : min= 66, max= 588, avg=247.60, stdev=145.92, samples=20 00:26:13.409 lat (msec) : 20=0.47%, 50=6.34%, 100=13.54%, 250=41.46%, 500=26.73% 00:26:13.409 lat (msec) : 750=9.72%, 1000=1.69%, 2000=0.04% 00:26:13.409 cpu : usr=0.12%, sys=0.96%, ctx=491, majf=0, minf=4097 00:26:13.409 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:26:13.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.409 issued rwts: total=2540,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.409 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.409 00:26:13.409 Run status group 0 (all jobs): 00:26:13.409 READ: bw=867MiB/s (909MB/s), 40.3MiB/s-129MiB/s (42.3MB/s-136MB/s), io=8788MiB (9215MB), run=10014-10134msec 00:26:13.409 00:26:13.409 Disk stats (read/write): 00:26:13.409 nvme0n1: ios=6806/0, merge=0/0, ticks=1246683/0, in_queue=1246683, util=94.43% 00:26:13.409 nvme10n1: ios=5652/0, merge=0/0, ticks=1243911/0, in_queue=1243911, util=94.84% 00:26:13.409 nvme1n1: ios=9705/0, merge=0/0, ticks=1240878/0, in_queue=1240878, util=95.37% 00:26:13.409 nvme2n1: ios=9761/0, merge=0/0, ticks=1221727/0, in_queue=1221727, util=95.72% 00:26:13.409 nvme3n1: ios=5337/0, merge=0/0, ticks=1247863/0, in_queue=1247863, util=96.02% 00:26:13.409 nvme4n1: ios=6732/0, merge=0/0, ticks=1222477/0, in_queue=1222477, util=96.77% 00:26:13.409 nvme5n1: ios=5196/0, merge=0/0, ticks=1237177/0, in_queue=1237177, util=97.12% 00:26:13.409 nvme6n1: ios=3176/0, merge=0/0, ticks=1247902/0, in_queue=1247902, util=97.53% 00:26:13.409 nvme7n1: ios=3444/0, merge=0/0, ticks=1244081/0, 
in_queue=1244081, util=98.48% 00:26:13.409 nvme8n1: ios=7525/0, merge=0/0, ticks=1250321/0, in_queue=1250321, util=98.95% 00:26:13.409 nvme9n1: ios=4893/0, merge=0/0, ticks=1218730/0, in_queue=1218730, util=99.23% 00:26:13.409 06:16:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:13.409 [global] 00:26:13.409 thread=1 00:26:13.409 invalidate=1 00:26:13.409 rw=randwrite 00:26:13.409 time_based=1 00:26:13.409 runtime=10 00:26:13.409 ioengine=libaio 00:26:13.409 direct=1 00:26:13.409 bs=262144 00:26:13.409 iodepth=64 00:26:13.409 norandommap=1 00:26:13.409 numjobs=1 00:26:13.409 00:26:13.409 [job0] 00:26:13.409 filename=/dev/nvme0n1 00:26:13.409 [job1] 00:26:13.409 filename=/dev/nvme10n1 00:26:13.409 [job2] 00:26:13.409 filename=/dev/nvme1n1 00:26:13.409 [job3] 00:26:13.409 filename=/dev/nvme2n1 00:26:13.409 [job4] 00:26:13.409 filename=/dev/nvme3n1 00:26:13.409 [job5] 00:26:13.409 filename=/dev/nvme4n1 00:26:13.409 [job6] 00:26:13.409 filename=/dev/nvme5n1 00:26:13.409 [job7] 00:26:13.409 filename=/dev/nvme6n1 00:26:13.409 [job8] 00:26:13.409 filename=/dev/nvme7n1 00:26:13.409 [job9] 00:26:13.409 filename=/dev/nvme8n1 00:26:13.409 [job10] 00:26:13.409 filename=/dev/nvme9n1 00:26:13.409 Could not set queue depth (nvme0n1) 00:26:13.409 Could not set queue depth (nvme10n1) 00:26:13.409 Could not set queue depth (nvme1n1) 00:26:13.409 Could not set queue depth (nvme2n1) 00:26:13.409 Could not set queue depth (nvme3n1) 00:26:13.409 Could not set queue depth (nvme4n1) 00:26:13.409 Could not set queue depth (nvme5n1) 00:26:13.409 Could not set queue depth (nvme6n1) 00:26:13.409 Could not set queue depth (nvme7n1) 00:26:13.409 Could not set queue depth (nvme8n1) 00:26:13.409 Could not set queue depth (nvme9n1) 00:26:13.409 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:26:13.409 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:13.409 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:13.409 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:13.409 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:13.409 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:13.409 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:13.409 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:13.409 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:13.409 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:13.409 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:13.409 fio-3.35 00:26:13.409 Starting 11 threads 00:26:23.368 00:26:23.368 job0: (groupid=0, jobs=1): err= 0: pid=1058818: Sun Dec 15 06:16:43 2024 00:26:23.368 write: IOPS=295, BW=74.0MiB/s (77.6MB/s)(750MiB/10134msec); 0 zone resets 00:26:23.368 slat (usec): min=20, max=45836, avg=2040.52, stdev=6304.34 00:26:23.368 clat (usec): min=619, max=622599, avg=214203.88, stdev=141838.62 00:26:23.368 lat (usec): min=655, max=626299, avg=216244.40, stdev=143293.05 00:26:23.368 clat percentiles (usec): 00:26:23.368 | 1.00th=[ 1663], 5.00th=[ 7046], 10.00th=[ 43254], 20.00th=[ 84411], 00:26:23.368 | 30.00th=[119014], 40.00th=[132645], 50.00th=[185598], 60.00th=[248513], 00:26:23.368 | 
70.00th=[316670], 80.00th=[346031], 90.00th=[413139], 95.00th=[459277], 00:26:23.368 | 99.00th=[549454], 99.50th=[574620], 99.90th=[608175], 99.95th=[608175], 00:26:23.368 | 99.99th=[624952] 00:26:23.368 bw ( KiB/s): min=31232, max=146432, per=7.02%, avg=75136.00, stdev=34579.36, samples=20 00:26:23.368 iops : min= 122, max= 572, avg=293.50, stdev=135.08, samples=20 00:26:23.368 lat (usec) : 750=0.10%, 1000=0.23% 00:26:23.368 lat (msec) : 2=1.10%, 4=1.90%, 10=2.97%, 20=0.90%, 50=4.10% 00:26:23.368 lat (msec) : 100=11.61%, 250=37.16%, 500=37.22%, 750=2.70% 00:26:23.368 cpu : usr=0.76%, sys=0.97%, ctx=1838, majf=0, minf=1 00:26:23.368 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:23.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.368 issued rwts: total=0,2998,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.368 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.368 job1: (groupid=0, jobs=1): err= 0: pid=1058830: Sun Dec 15 06:16:43 2024 00:26:23.368 write: IOPS=499, BW=125MiB/s (131MB/s)(1271MiB/10168msec); 0 zone resets 00:26:23.368 slat (usec): min=20, max=80367, avg=1329.61, stdev=4827.85 00:26:23.368 clat (usec): min=725, max=770399, avg=126635.96, stdev=144210.30 00:26:23.368 lat (usec): min=758, max=774100, avg=127965.57, stdev=145691.50 00:26:23.368 clat percentiles (usec): 00:26:23.368 | 1.00th=[ 1500], 5.00th=[ 4178], 10.00th=[ 9896], 20.00th=[ 25035], 00:26:23.368 | 30.00th=[ 40109], 40.00th=[ 48497], 50.00th=[ 52691], 60.00th=[ 76022], 00:26:23.368 | 70.00th=[129500], 80.00th=[252707], 90.00th=[362808], 95.00th=[429917], 00:26:23.368 | 99.00th=[517997], 99.50th=[616563], 99.90th=[742392], 99.95th=[759170], 00:26:23.368 | 99.99th=[767558] 00:26:23.368 bw ( KiB/s): min=34816, max=385024, per=12.00%, avg=128501.55, stdev=106554.71, samples=20 00:26:23.368 iops : min= 136, max= 1504, avg=501.95, 
stdev=416.23, samples=20 00:26:23.368 lat (usec) : 750=0.06%, 1000=0.33% 00:26:23.368 lat (msec) : 2=1.24%, 4=3.25%, 10=5.25%, 20=6.14%, 50=26.83% 00:26:23.368 lat (msec) : 100=22.49%, 250=14.32%, 500=18.16%, 750=1.87%, 1000=0.06% 00:26:23.368 cpu : usr=1.02%, sys=1.45%, ctx=2981, majf=0, minf=1 00:26:23.368 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:23.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.368 issued rwts: total=0,5083,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.368 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.368 job2: (groupid=0, jobs=1): err= 0: pid=1058831: Sun Dec 15 06:16:43 2024 00:26:23.368 write: IOPS=301, BW=75.5MiB/s (79.1MB/s)(761MiB/10083msec); 0 zone resets 00:26:23.368 slat (usec): min=20, max=169553, avg=2690.04, stdev=7670.94 00:26:23.368 clat (usec): min=912, max=681352, avg=209239.00, stdev=157646.92 00:26:23.368 lat (usec): min=952, max=681418, avg=211929.04, stdev=159547.54 00:26:23.368 clat percentiles (msec): 00:26:23.368 | 1.00th=[ 3], 5.00th=[ 13], 10.00th=[ 34], 20.00th=[ 45], 00:26:23.368 | 30.00th=[ 93], 40.00th=[ 144], 50.00th=[ 182], 60.00th=[ 209], 00:26:23.368 | 70.00th=[ 284], 80.00th=[ 347], 90.00th=[ 447], 95.00th=[ 523], 00:26:23.368 | 99.00th=[ 642], 99.50th=[ 659], 99.90th=[ 676], 99.95th=[ 684], 00:26:23.368 | 99.99th=[ 684] 00:26:23.368 bw ( KiB/s): min=26624, max=291840, per=7.13%, avg=76324.35, stdev=58884.84, samples=20 00:26:23.368 iops : min= 104, max= 1140, avg=298.10, stdev=230.00, samples=20 00:26:23.368 lat (usec) : 1000=0.10% 00:26:23.368 lat (msec) : 2=0.49%, 4=1.54%, 10=2.46%, 20=2.46%, 50=14.59% 00:26:23.368 lat (msec) : 100=9.30%, 250=34.00%, 500=28.65%, 750=6.41% 00:26:23.368 cpu : usr=0.68%, sys=1.20%, ctx=1346, majf=0, minf=1 00:26:23.368 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 
00:26:23.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.368 issued rwts: total=0,3044,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.368 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.368 job3: (groupid=0, jobs=1): err= 0: pid=1058832: Sun Dec 15 06:16:43 2024 00:26:23.368 write: IOPS=431, BW=108MiB/s (113MB/s)(1096MiB/10170msec); 0 zone resets 00:26:23.368 slat (usec): min=29, max=113644, avg=1895.56, stdev=4918.31 00:26:23.368 clat (usec): min=1064, max=477765, avg=146469.48, stdev=106488.45 00:26:23.368 lat (usec): min=1097, max=477817, avg=148365.03, stdev=107527.59 00:26:23.368 clat percentiles (msec): 00:26:23.368 | 1.00th=[ 4], 5.00th=[ 22], 10.00th=[ 41], 20.00th=[ 54], 00:26:23.368 | 30.00th=[ 81], 40.00th=[ 93], 50.00th=[ 124], 60.00th=[ 148], 00:26:23.368 | 70.00th=[ 182], 80.00th=[ 207], 90.00th=[ 326], 95.00th=[ 372], 00:26:23.368 | 99.00th=[ 456], 99.50th=[ 464], 99.90th=[ 472], 99.95th=[ 472], 00:26:23.368 | 99.99th=[ 477] 00:26:23.368 bw ( KiB/s): min=34816, max=266240, per=10.33%, avg=110621.75, stdev=64417.31, samples=20 00:26:23.368 iops : min= 136, max= 1040, avg=432.10, stdev=251.65, samples=20 00:26:23.368 lat (msec) : 2=0.30%, 4=0.75%, 10=2.65%, 20=1.12%, 50=12.77% 00:26:23.368 lat (msec) : 100=24.63%, 250=42.62%, 500=15.17% 00:26:23.368 cpu : usr=0.95%, sys=1.20%, ctx=1601, majf=0, minf=2 00:26:23.368 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:23.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.368 issued rwts: total=0,4385,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.369 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.369 job4: (groupid=0, jobs=1): err= 0: pid=1058833: Sun Dec 15 06:16:43 2024 00:26:23.369 write: IOPS=491, 
BW=123MiB/s (129MB/s)(1249MiB/10162msec); 0 zone resets 00:26:23.369 slat (usec): min=28, max=112537, avg=1683.80, stdev=5131.16 00:26:23.369 clat (usec): min=1211, max=514602, avg=128455.97, stdev=113355.88 00:26:23.369 lat (usec): min=1252, max=514672, avg=130139.77, stdev=114770.96 00:26:23.369 clat percentiles (msec): 00:26:23.369 | 1.00th=[ 6], 5.00th=[ 28], 10.00th=[ 39], 20.00th=[ 43], 00:26:23.369 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 85], 60.00th=[ 107], 00:26:23.369 | 70.00th=[ 153], 80.00th=[ 201], 90.00th=[ 313], 95.00th=[ 397], 00:26:23.369 | 99.00th=[ 477], 99.50th=[ 489], 99.90th=[ 498], 99.95th=[ 510], 00:26:23.369 | 99.99th=[ 514] 00:26:23.369 bw ( KiB/s): min=32768, max=337408, per=11.79%, avg=126244.20, stdev=87862.00, samples=20 00:26:23.369 iops : min= 128, max= 1318, avg=493.10, stdev=343.22, samples=20 00:26:23.369 lat (msec) : 2=0.26%, 4=0.42%, 10=1.00%, 20=1.60%, 50=20.76% 00:26:23.369 lat (msec) : 100=33.00%, 250=30.50%, 500=12.39%, 750=0.06% 00:26:23.369 cpu : usr=1.26%, sys=1.57%, ctx=1910, majf=0, minf=1 00:26:23.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:26:23.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.369 issued rwts: total=0,4994,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.369 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.369 job5: (groupid=0, jobs=1): err= 0: pid=1058834: Sun Dec 15 06:16:43 2024 00:26:23.369 write: IOPS=447, BW=112MiB/s (117MB/s)(1138MiB/10173msec); 0 zone resets 00:26:23.369 slat (usec): min=25, max=116626, avg=1458.58, stdev=5101.09 00:26:23.369 clat (usec): min=1115, max=645517, avg=141518.16, stdev=123026.58 00:26:23.369 lat (usec): min=1416, max=645576, avg=142976.74, stdev=124265.57 00:26:23.369 clat percentiles (msec): 00:26:23.369 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 13], 20.00th=[ 36], 00:26:23.369 | 30.00th=[ 54], 
40.00th=[ 79], 50.00th=[ 115], 60.00th=[ 155], 00:26:23.369 | 70.00th=[ 186], 80.00th=[ 226], 90.00th=[ 326], 95.00th=[ 393], 00:26:23.369 | 99.00th=[ 514], 99.50th=[ 584], 99.90th=[ 634], 99.95th=[ 634], 00:26:23.369 | 99.99th=[ 642] 00:26:23.369 bw ( KiB/s): min=47616, max=247808, per=10.73%, avg=114901.00, stdev=54130.17, samples=20 00:26:23.369 iops : min= 186, max= 968, avg=448.80, stdev=211.47, samples=20 00:26:23.369 lat (msec) : 2=0.46%, 4=3.41%, 10=4.44%, 20=4.94%, 50=15.77% 00:26:23.369 lat (msec) : 100=18.08%, 250=37.02%, 500=14.59%, 750=1.30% 00:26:23.369 cpu : usr=0.87%, sys=1.58%, ctx=2951, majf=0, minf=1 00:26:23.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:23.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.369 issued rwts: total=0,4552,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.369 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.369 job6: (groupid=0, jobs=1): err= 0: pid=1058837: Sun Dec 15 06:16:43 2024 00:26:23.369 write: IOPS=435, BW=109MiB/s (114MB/s)(1107MiB/10163msec); 0 zone resets 00:26:23.369 slat (usec): min=18, max=67602, avg=1885.35, stdev=4965.71 00:26:23.369 clat (usec): min=1082, max=542318, avg=144947.10, stdev=101554.01 00:26:23.369 lat (usec): min=1710, max=542382, avg=146832.45, stdev=102922.21 00:26:23.369 clat percentiles (msec): 00:26:23.369 | 1.00th=[ 9], 5.00th=[ 28], 10.00th=[ 46], 20.00th=[ 54], 00:26:23.369 | 30.00th=[ 83], 40.00th=[ 104], 50.00th=[ 120], 60.00th=[ 150], 00:26:23.369 | 70.00th=[ 182], 80.00th=[ 201], 90.00th=[ 300], 95.00th=[ 359], 00:26:23.369 | 99.00th=[ 481], 99.50th=[ 510], 99.90th=[ 542], 99.95th=[ 542], 00:26:23.369 | 99.99th=[ 542] 00:26:23.369 bw ( KiB/s): min=30720, max=341504, per=10.43%, avg=111667.20, stdev=71539.91, samples=20 00:26:23.369 iops : min= 120, max= 1334, avg=436.20, stdev=279.45, samples=20 
00:26:23.369 lat (msec) : 2=0.05%, 4=0.27%, 10=0.95%, 20=1.38%, 50=10.62% 00:26:23.369 lat (msec) : 100=25.26%, 250=47.97%, 500=12.88%, 750=0.63% 00:26:23.369 cpu : usr=1.00%, sys=1.50%, ctx=1902, majf=0, minf=1 00:26:23.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:23.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.369 issued rwts: total=0,4426,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.369 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.369 job7: (groupid=0, jobs=1): err= 0: pid=1058838: Sun Dec 15 06:16:43 2024 00:26:23.369 write: IOPS=325, BW=81.5MiB/s (85.4MB/s)(829MiB/10170msec); 0 zone resets 00:26:23.369 slat (usec): min=30, max=120805, avg=2024.37, stdev=6410.43 00:26:23.369 clat (usec): min=883, max=597908, avg=194176.81, stdev=132741.97 00:26:23.369 lat (usec): min=1294, max=605565, avg=196201.18, stdev=134149.25 00:26:23.369 clat percentiles (msec): 00:26:23.369 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 47], 20.00th=[ 89], 00:26:23.369 | 30.00th=[ 96], 40.00th=[ 129], 50.00th=[ 169], 60.00th=[ 201], 00:26:23.369 | 70.00th=[ 255], 80.00th=[ 326], 90.00th=[ 388], 95.00th=[ 456], 00:26:23.369 | 99.00th=[ 527], 99.50th=[ 558], 99.90th=[ 584], 99.95th=[ 584], 00:26:23.369 | 99.99th=[ 600] 00:26:23.369 bw ( KiB/s): min=34816, max=169984, per=7.77%, avg=83265.45, stdev=39516.38, samples=20 00:26:23.369 iops : min= 136, max= 664, avg=325.25, stdev=154.35, samples=20 00:26:23.369 lat (usec) : 1000=0.03% 00:26:23.369 lat (msec) : 2=0.15%, 4=1.66%, 10=3.65%, 20=0.94%, 50=3.83% 00:26:23.369 lat (msec) : 100=24.16%, 250=34.03%, 500=29.56%, 750=1.99% 00:26:23.369 cpu : usr=0.86%, sys=0.99%, ctx=1860, majf=0, minf=1 00:26:23.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:23.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:26:23.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.369 issued rwts: total=0,3315,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.369 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.369 job8: (groupid=0, jobs=1): err= 0: pid=1058840: Sun Dec 15 06:16:43 2024 00:26:23.369 write: IOPS=289, BW=72.3MiB/s (75.8MB/s)(735MiB/10167msec); 0 zone resets 00:26:23.369 slat (usec): min=22, max=175005, avg=2397.24, stdev=8286.23 00:26:23.369 clat (usec): min=636, max=769123, avg=218764.98, stdev=165226.34 00:26:23.369 lat (usec): min=674, max=775606, avg=221162.22, stdev=167140.20 00:26:23.369 clat percentiles (msec): 00:26:23.369 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 12], 20.00th=[ 43], 00:26:23.369 | 30.00th=[ 90], 40.00th=[ 133], 50.00th=[ 207], 60.00th=[ 279], 00:26:23.369 | 70.00th=[ 338], 80.00th=[ 380], 90.00th=[ 418], 95.00th=[ 477], 00:26:23.369 | 99.00th=[ 667], 99.50th=[ 693], 99.90th=[ 760], 99.95th=[ 760], 00:26:23.369 | 99.99th=[ 768] 00:26:23.369 bw ( KiB/s): min=28672, max=175616, per=6.87%, avg=73625.60, stdev=41165.16, samples=20 00:26:23.369 iops : min= 112, max= 686, avg=287.60, stdev=160.80, samples=20 00:26:23.369 lat (usec) : 750=0.07%, 1000=0.17% 00:26:23.369 lat (msec) : 2=0.54%, 4=2.24%, 10=5.44%, 20=6.02%, 50=6.73% 00:26:23.369 lat (msec) : 100=12.07%, 250=20.99%, 500=41.97%, 750=3.64%, 1000=0.10% 00:26:23.369 cpu : usr=0.70%, sys=0.91%, ctx=1896, majf=0, minf=1 00:26:23.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:23.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.369 issued rwts: total=0,2940,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.369 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.369 job9: (groupid=0, jobs=1): err= 0: pid=1058847: Sun Dec 15 06:16:43 2024 00:26:23.369 write: IOPS=350, BW=87.7MiB/s 
(92.0MB/s)(884MiB/10083msec); 0 zone resets 00:26:23.369 slat (usec): min=27, max=64640, avg=2180.64, stdev=5836.55 00:26:23.369 clat (usec): min=804, max=660016, avg=180154.99, stdev=130246.50 00:26:23.369 lat (usec): min=831, max=665392, avg=182335.62, stdev=131706.85 00:26:23.369 clat percentiles (msec): 00:26:23.369 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 20], 20.00th=[ 83], 00:26:23.369 | 30.00th=[ 118], 40.00th=[ 133], 50.00th=[ 157], 60.00th=[ 184], 00:26:23.369 | 70.00th=[ 194], 80.00th=[ 268], 90.00th=[ 393], 95.00th=[ 443], 00:26:23.369 | 99.00th=[ 592], 99.50th=[ 625], 99.90th=[ 651], 99.95th=[ 651], 00:26:23.369 | 99.99th=[ 659] 00:26:23.369 bw ( KiB/s): min=38912, max=150528, per=8.30%, avg=88939.90, stdev=37576.10, samples=20 00:26:23.369 iops : min= 152, max= 588, avg=347.40, stdev=146.80, samples=20 00:26:23.369 lat (usec) : 1000=0.14% 00:26:23.369 lat (msec) : 2=0.40%, 4=1.53%, 10=4.30%, 20=3.76%, 50=5.57% 00:26:23.369 lat (msec) : 100=9.30%, 250=54.23%, 500=18.35%, 750=2.43% 00:26:23.369 cpu : usr=0.75%, sys=1.14%, ctx=1803, majf=0, minf=1 00:26:23.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:23.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.369 issued rwts: total=0,3537,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.369 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.370 job10: (groupid=0, jobs=1): err= 0: pid=1058848: Sun Dec 15 06:16:43 2024 00:26:23.370 write: IOPS=323, BW=80.8MiB/s (84.7MB/s)(821MiB/10166msec); 0 zone resets 00:26:23.370 slat (usec): min=29, max=155876, avg=2360.25, stdev=7306.01 00:26:23.370 clat (msec): min=2, max=680, avg=195.54, stdev=141.68 00:26:23.370 lat (msec): min=2, max=680, avg=197.91, stdev=143.34 00:26:23.370 clat percentiles (msec): 00:26:23.370 | 1.00th=[ 6], 5.00th=[ 24], 10.00th=[ 48], 20.00th=[ 87], 00:26:23.370 | 30.00th=[ 96], 
40.00th=[ 113], 50.00th=[ 157], 60.00th=[ 192], 00:26:23.370 | 70.00th=[ 249], 80.00th=[ 334], 90.00th=[ 397], 95.00th=[ 439], 00:26:23.370 | 99.00th=[ 634], 99.50th=[ 659], 99.90th=[ 676], 99.95th=[ 676], 00:26:23.370 | 99.99th=[ 684] 00:26:23.370 bw ( KiB/s): min=24576, max=169984, per=7.70%, avg=82467.50, stdev=38268.78, samples=20 00:26:23.370 iops : min= 96, max= 664, avg=322.10, stdev=149.47, samples=20 00:26:23.370 lat (msec) : 4=0.61%, 10=1.04%, 20=2.65%, 50=6.06%, 100=24.51% 00:26:23.370 lat (msec) : 250=35.37%, 500=26.33%, 750=3.44% 00:26:23.370 cpu : usr=0.71%, sys=1.18%, ctx=1537, majf=0, minf=1 00:26:23.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:23.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.370 issued rwts: total=0,3285,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.370 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.370 00:26:23.370 Run status group 0 (all jobs): 00:26:23.370 WRITE: bw=1046MiB/s (1097MB/s), 72.3MiB/s-125MiB/s (75.8MB/s-131MB/s), io=10.4GiB (11.2GB), run=10083-10173msec 00:26:23.370 00:26:23.370 Disk stats (read/write): 00:26:23.370 nvme0n1: ios=49/5969, merge=0/0, ticks=39/1245779, in_queue=1245818, util=94.61% 00:26:23.370 nvme10n1: ios=22/10110, merge=0/0, ticks=28/1236926, in_queue=1236954, util=94.84% 00:26:23.370 nvme1n1: ios=0/5846, merge=0/0, ticks=0/1208212, in_queue=1208212, util=95.22% 00:26:23.370 nvme2n1: ios=5/8711, merge=0/0, ticks=210/1231829, in_queue=1232039, util=95.78% 00:26:23.370 nvme3n1: ios=41/9936, merge=0/0, ticks=888/1227444, in_queue=1228332, util=100.00% 00:26:23.370 nvme4n1: ios=0/9025, merge=0/0, ticks=0/1233946, in_queue=1233946, util=96.62% 00:26:23.370 nvme5n1: ios=42/8793, merge=0/0, ticks=2203/1230557, in_queue=1232760, util=99.85% 00:26:23.370 nvme6n1: ios=36/6570, merge=0/0, ticks=647/1237122, in_queue=1237769, 
util=100.00% 00:26:23.370 nvme7n1: ios=35/5822, merge=0/0, ticks=685/1239722, in_queue=1240407, util=99.97% 00:26:23.370 nvme8n1: ios=33/6832, merge=0/0, ticks=1194/1207699, in_queue=1208893, util=99.89% 00:26:23.370 nvme9n1: ios=39/6514, merge=0/0, ticks=4577/1224053, in_queue=1228630, util=99.94% 00:26:23.370 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:23.370 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:23.370 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:23.370 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:23.627 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:23.627 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:23.627 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:23.627 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:23.627 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:23.627 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:23.628 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:23.628 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:23.628 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:23.628 06:16:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.628 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:23.628 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.628 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:23.628 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:23.886 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:23.886 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:23.886 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:23.886 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:23.886 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:23.886 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:23.886 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:23.886 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:23.886 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:23.886 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.886 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:26:23.886 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.886 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:23.886 06:16:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:24.144 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:24.144 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:24.144 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:24.144 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:24.144 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:24.144 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:24.144 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:24.144 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:24.144 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:24.144 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.144 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:24.144 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.144 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 
-- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:24.144 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:24.402 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:24.402 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:24.402 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:24.402 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:24.402 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:24.402 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:24.402 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:24.402 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:24.402 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:24.402 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.402 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:24.402 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.402 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:24.402 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:24.968 
NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:24.968 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:24.968 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:24.968 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:24.968 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:24.968 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:24.968 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:24.968 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:24.968 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:24.968 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.968 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:24.968 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.968 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:24.968 06:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:24.968 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:24.968 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:24.968 06:16:45 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:24.968 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:24.968 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:24.968 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:24.968 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:24.968 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:24.968 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:24.968 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.968 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:24.968 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.968 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:24.968 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:25.226 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:25.226 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:25.226 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:25.226 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o 
NAME,SERIAL 00:26:25.226 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:25.226 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:25.226 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:25.226 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:25.226 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:25.226 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.226 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.226 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.226 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:25.226 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:25.484 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:25.484 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:25.484 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:25.484 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:25.484 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:25.484 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:25.484 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:25.484 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:25.484 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:25.484 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.484 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.484 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.484 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:25.484 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:25.484 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:25.484 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:25.484 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:25.484 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:25.484 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:25.484 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:25.484 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:25.742 06:16:45 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:25.742 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:25.742 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.742 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.742 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.742 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:25.742 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:25.742 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:25.742 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:25.742 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:25.742 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:25.742 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:25.742 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:25.742 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:25.742 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:25.742 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:25.742 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.742 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.742 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.742 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:25.742 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:26.000 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:26.000 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:26.000 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:26.000 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:26.000 06:16:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:26.000 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:26.000 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:26.000 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:26.000 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:26.000 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.000 06:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:26.000 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.000 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:26.000 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:26.000 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:26.000 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:26.000 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:26.001 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:26.001 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:26.001 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:26.001 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:26.001 rmmod nvme_tcp 00:26:26.001 rmmod nvme_fabrics 00:26:26.001 rmmod nvme_keyring 00:26:26.001 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:26.001 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:26.001 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:26.001 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 1051218 ']' 00:26:26.001 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 1051218 00:26:26.001 06:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 1051218 ']' 00:26:26.001 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 1051218 00:26:26.001 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:26.001 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:26.001 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1051218 00:26:26.258 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:26.258 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:26.259 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1051218' 00:26:26.259 killing process with pid 1051218 00:26:26.259 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 1051218 00:26:26.259 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 1051218 00:26:26.518 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:26.518 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:26.518 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:26.518 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:26.518 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:26:26.518 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:26.518 
06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:26:26.518 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:26.518 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:26.518 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.518 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:26.518 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.057 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:29.057 00:26:29.057 real 1m11.447s 00:26:29.057 user 4m17.735s 00:26:29.057 sys 0m17.841s 00:26:29.057 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:29.057 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.057 ************************************ 00:26:29.057 END TEST nvmf_multiconnection 00:26:29.057 ************************************ 00:26:29.057 06:16:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:29.057 06:16:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:29.057 06:16:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:29.057 06:16:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:29.057 ************************************ 00:26:29.057 START TEST nvmf_initiator_timeout 
00:26:29.057 ************************************ 00:26:29.057 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:29.057 * Looking for test storage... 00:26:29.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:29.058 06:16:48 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:29.058 06:16:48 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:29.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.058 --rc genhtml_branch_coverage=1 00:26:29.058 --rc genhtml_function_coverage=1 00:26:29.058 --rc genhtml_legend=1 00:26:29.058 --rc geninfo_all_blocks=1 00:26:29.058 --rc geninfo_unexecuted_blocks=1 00:26:29.058 00:26:29.058 ' 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:29.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.058 --rc genhtml_branch_coverage=1 00:26:29.058 --rc genhtml_function_coverage=1 00:26:29.058 --rc genhtml_legend=1 00:26:29.058 --rc geninfo_all_blocks=1 00:26:29.058 --rc geninfo_unexecuted_blocks=1 00:26:29.058 00:26:29.058 ' 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:29.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.058 --rc genhtml_branch_coverage=1 00:26:29.058 --rc genhtml_function_coverage=1 00:26:29.058 --rc genhtml_legend=1 00:26:29.058 --rc geninfo_all_blocks=1 00:26:29.058 --rc geninfo_unexecuted_blocks=1 00:26:29.058 00:26:29.058 ' 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:29.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.058 --rc genhtml_branch_coverage=1 00:26:29.058 --rc genhtml_function_coverage=1 
00:26:29.058 --rc genhtml_legend=1 00:26:29.058 --rc geninfo_all_blocks=1 00:26:29.058 --rc geninfo_unexecuted_blocks=1 00:26:29.058 00:26:29.058 ' 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:26:29.058 06:16:48 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.058 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:29.059 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.059 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:29.059 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:29.059 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:29.059 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:29.059 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:29.059 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:29.059 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:29.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:29.059 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:29.059 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:29.059 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:29.059 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:29.059 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:29.059 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:29.059 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:29.059 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:29.059 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:29.059 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:29.059 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:29.059 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.059 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:29.059 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.059 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:29.059 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:29.059 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:29.059 06:16:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:35.630 06:16:54 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:35.630 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:35.630 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:35.630 06:16:54 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:35.630 Found net devices under 0000:af:00.0: cvl_0_0 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:35.630 06:16:54 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:35.630 Found net devices under 0000:af:00.1: cvl_0_1 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:35.630 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:35.631 06:16:54 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:35.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:35.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:26:35.631 00:26:35.631 --- 10.0.0.2 ping statistics --- 00:26:35.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.631 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:35.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:35.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:26:35.631 00:26:35.631 --- 10.0.0.1 ping statistics --- 00:26:35.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.631 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=1063924 
00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 1063924 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 1063924 ']' 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:35.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:35.631 06:16:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:35.631 [2024-12-15 06:16:54.847617] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:26:35.631 [2024-12-15 06:16:54.847660] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:35.631 [2024-12-15 06:16:54.925907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:35.631 [2024-12-15 06:16:54.948742] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:35.631 [2024-12-15 06:16:54.948779] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:35.631 [2024-12-15 06:16:54.948787] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:35.631 [2024-12-15 06:16:54.948793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:35.631 [2024-12-15 06:16:54.948797] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:35.631 [2024-12-15 06:16:54.950247] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.631 [2024-12-15 06:16:54.950356] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:26:35.631 [2024-12-15 06:16:54.950454] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:26:35.631 [2024-12-15 06:16:54.950457] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:35.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:35.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:35.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:35.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:35.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:35.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:35.631 
06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:35.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:35.631 Malloc0 00:26:35.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:35.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:35.631 Delay0 00:26:35.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:35.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:35.631 [2024-12-15 06:16:55.137854] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:35.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:35.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:35.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:35.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.631 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:35.632 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.632 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:35.632 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.632 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:35.632 [2024-12-15 06:16:55.163160] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:35.632 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.632 06:16:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:36.198 06:16:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:36.198 
06:16:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:26:36.198 06:16:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:36.198 06:16:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:36.198 06:16:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:26:38.725 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:38.725 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:38.725 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:38.725 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:38.725 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:38.725 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:26:38.725 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1064471 00:26:38.725 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:38.725 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:38.725 [global] 00:26:38.725 thread=1 00:26:38.725 invalidate=1 00:26:38.725 rw=write 00:26:38.725 time_based=1 00:26:38.725 runtime=60 00:26:38.725 ioengine=libaio 00:26:38.725 direct=1 00:26:38.725 bs=4096 00:26:38.725 
iodepth=1 00:26:38.725 norandommap=0 00:26:38.725 numjobs=1 00:26:38.725 00:26:38.725 verify_dump=1 00:26:38.725 verify_backlog=512 00:26:38.725 verify_state_save=0 00:26:38.725 do_verify=1 00:26:38.725 verify=crc32c-intel 00:26:38.725 [job0] 00:26:38.725 filename=/dev/nvme0n1 00:26:38.725 Could not set queue depth (nvme0n1) 00:26:38.725 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:38.725 fio-3.35 00:26:38.725 Starting 1 thread 00:26:41.253 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:41.253 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.253 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:41.253 true 00:26:41.253 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.253 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:41.253 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.253 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:41.253 true 00:26:41.253 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.253 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:41.253 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.253 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:26:41.253 true 00:26:41.253 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.253 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:41.253 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.253 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:41.253 true 00:26:41.253 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.253 06:17:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:44.535 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:44.535 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.535 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.535 true 00:26:44.535 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.535 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:44.536 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.536 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.536 true 00:26:44.536 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.536 06:17:04 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:44.536 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.536 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.536 true 00:26:44.536 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.536 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:44.536 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.536 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.536 true 00:26:44.536 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.536 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:44.536 06:17:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1064471 00:27:40.899 00:27:40.899 job0: (groupid=0, jobs=1): err= 0: pid=1064740: Sun Dec 15 06:17:58 2024 00:27:40.899 read: IOPS=565, BW=2263KiB/s (2318kB/s)(133MiB/60000msec) 00:27:40.899 slat (nsec): min=3078, max=32672, avg=7278.30, stdev=1117.12 00:27:40.899 clat (usec): min=190, max=41621k, avg=1566.54, stdev=225898.60 00:27:40.899 lat (usec): min=197, max=41621k, avg=1573.82, stdev=225898.70 00:27:40.899 clat percentiles (usec): 00:27:40.899 | 1.00th=[ 217], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 235], 00:27:40.899 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 249], 00:27:40.899 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 262], 95.00th=[ 269], 00:27:40.899 
| 99.00th=[ 310], 99.50th=[ 445], 99.90th=[41157], 99.95th=[41157], 00:27:40.899 | 99.99th=[41157] 00:27:40.899 write: IOPS=571, BW=2287KiB/s (2342kB/s)(134MiB/60000msec); 0 zone resets 00:27:40.899 slat (usec): min=9, max=24029, avg=12.13, stdev=171.80 00:27:40.899 clat (usec): min=134, max=1025, avg=174.90, stdev=16.72 00:27:40.899 lat (usec): min=154, max=24334, avg=187.03, stdev=173.81 00:27:40.899 clat percentiles (usec): 00:27:40.899 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:27:40.899 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:27:40.899 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 190], 95.00th=[ 200], 00:27:40.899 | 99.00th=[ 235], 99.50th=[ 273], 99.90th=[ 293], 99.95th=[ 297], 00:27:40.899 | 99.99th=[ 343] 00:27:40.899 bw ( KiB/s): min= 4096, max=10728, per=100.00%, avg=8588.39, stdev=1723.28, samples=31 00:27:40.899 iops : min= 1024, max= 2682, avg=2147.10, stdev=430.82, samples=31 00:27:40.899 lat (usec) : 250=81.18%, 500=18.69%, 750=0.01%, 1000=0.01% 00:27:40.899 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 50=0.11%, >=2000=0.01% 00:27:40.899 cpu : usr=0.54%, sys=1.11%, ctx=68259, majf=0, minf=1 00:27:40.899 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:40.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:40.899 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:40.899 issued rwts: total=33948,34304,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:40.899 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:40.899 00:27:40.899 Run status group 0 (all jobs): 00:27:40.899 READ: bw=2263KiB/s (2318kB/s), 2263KiB/s-2263KiB/s (2318kB/s-2318kB/s), io=133MiB (139MB), run=60000-60000msec 00:27:40.899 WRITE: bw=2287KiB/s (2342kB/s), 2287KiB/s-2287KiB/s (2342kB/s-2342kB/s), io=134MiB (141MB), run=60000-60000msec 00:27:40.899 00:27:40.899 Disk stats (read/write): 00:27:40.899 nvme0n1: ios=33951/33967, merge=0/0, 
ticks=14097/5762, in_queue=19859, util=99.86% 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:40.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:40.899 nvmf hotplug test: fio successful as expected 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:40.899 
06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:40.899 rmmod nvme_tcp 00:27:40.899 rmmod nvme_fabrics 00:27:40.899 rmmod nvme_keyring 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 1063924 ']' 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 1063924 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 1063924 ']' 
00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 1063924 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:40.899 06:17:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1063924 00:27:40.900 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:40.900 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:40.900 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1063924' 00:27:40.900 killing process with pid 1063924 00:27:40.900 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 1063924 00:27:40.900 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 1063924 00:27:40.900 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:40.900 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:40.900 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:40.900 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:40.900 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:27:40.900 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:40.900 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@791 -- # iptables-restore 00:27:40.900 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:40.900 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:40.900 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.900 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:40.900 06:17:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.467 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:41.467 00:27:41.467 real 1m12.632s 00:27:41.467 user 4m22.169s 00:27:41.467 sys 0m7.858s 00:27:41.467 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:41.467 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:41.467 ************************************ 00:27:41.467 END TEST nvmf_initiator_timeout 00:27:41.467 ************************************ 00:27:41.467 06:18:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:41.467 06:18:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:41.467 06:18:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:41.467 06:18:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:41.467 06:18:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:48.034 06:18:06 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:48.034 06:18:06 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:48.034 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:48.034 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:48.034 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:48.035 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.035 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:48.035 Found net devices under 
0000:af:00.0: cvl_0_0 00:27:48.035 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.035 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:48.035 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.035 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:48.035 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:48.035 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:48.035 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:48.035 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.035 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:48.035 Found net devices under 0000:af:00.1: cvl_0_1 00:27:48.035 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.035 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:48.035 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:48.035 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:27:48.035 06:18:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:48.035 06:18:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:48.035 06:18:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:48.035 06:18:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:48.035 
************************************ 00:27:48.035 START TEST nvmf_perf_adq 00:27:48.035 ************************************ 00:27:48.035 06:18:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:48.035 * Looking for test storage... 00:27:48.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 
00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:27:48.035 06:18:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:48.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.035 --rc genhtml_branch_coverage=1 00:27:48.035 --rc genhtml_function_coverage=1 00:27:48.035 --rc genhtml_legend=1 00:27:48.035 --rc geninfo_all_blocks=1 00:27:48.035 --rc geninfo_unexecuted_blocks=1 00:27:48.035 00:27:48.035 ' 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:48.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.035 --rc genhtml_branch_coverage=1 00:27:48.035 --rc genhtml_function_coverage=1 00:27:48.035 --rc genhtml_legend=1 00:27:48.035 --rc geninfo_all_blocks=1 00:27:48.035 --rc geninfo_unexecuted_blocks=1 00:27:48.035 00:27:48.035 ' 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:48.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.035 --rc genhtml_branch_coverage=1 00:27:48.035 --rc genhtml_function_coverage=1 00:27:48.035 --rc genhtml_legend=1 00:27:48.035 --rc geninfo_all_blocks=1 00:27:48.035 --rc geninfo_unexecuted_blocks=1 00:27:48.035 00:27:48.035 ' 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:48.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.035 --rc genhtml_branch_coverage=1 00:27:48.035 --rc genhtml_function_coverage=1 00:27:48.035 --rc genhtml_legend=1 00:27:48.035 --rc geninfo_all_blocks=1 00:27:48.035 --rc geninfo_unexecuted_blocks=1 00:27:48.035 00:27:48.035 ' 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:48.035 06:18:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.035 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.036 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:48.036 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.036 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:27:48.036 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:48.036 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:48.036 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:48.036 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:48.036 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:48.036 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:48.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:48.036 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:48.036 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:48.036 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:48.036 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:48.036 06:18:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:48.036 06:18:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:53.308 06:18:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:53.308 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:53.308 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:53.309 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:53.309 Found net devices under 0000:af:00.0: cvl_0_0 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:53.309 Found net devices under 0000:af:00.1: cvl_0_1 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:53.309 06:18:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:53.876 06:18:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:56.409 06:18:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:01.686 06:18:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:01.686 06:18:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:01.686 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.1 (0x8086 - 0x159b)' 00:28:01.686 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:01.686 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:01.687 Found net devices under 0000:af:00.0: cvl_0_0 
00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:01.687 Found net devices under 0000:af:00.1: cvl_0_1 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 
-- # ip link set cvl_0_1 up 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:01.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:01.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.717 ms 00:28:01.687 00:28:01.687 --- 10.0.0.2 ping statistics --- 00:28:01.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.687 rtt min/avg/max/mdev = 0.717/0.717/0.717/0.000 ms 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:01.687 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:01.687 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:28:01.687 00:28:01.687 --- 10.0.0.1 ping statistics --- 00:28:01.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.687 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1082752 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1082752 00:28:01.687 
06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1082752 ']' 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:01.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.687 [2024-12-15 06:18:21.631274] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:01.687 [2024-12-15 06:18:21.631316] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:01.687 [2024-12-15 06:18:21.710510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:01.687 [2024-12-15 06:18:21.733109] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:01.687 [2024-12-15 06:18:21.733145] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:01.687 [2024-12-15 06:18:21.733152] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:01.687 [2024-12-15 06:18:21.733158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:01.687 [2024-12-15 06:18:21.733163] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:01.687 [2024-12-15 06:18:21.734533] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.687 [2024-12-15 06:18:21.734647] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:01.687 [2024-12-15 06:18:21.734742] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:01.687 [2024-12-15 06:18:21.734743] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:01.687 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:01.688 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:01.688 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.688 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:01.688 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:28:01.688 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:01.688 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:01.688 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.688 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.946 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.946 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:01.947 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:01.947 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.947 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.947 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.947 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:01.947 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.947 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.947 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.947 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:01.947 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.947 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.947 [2024-12-15 06:18:21.950559] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:01.947 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.947 
06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:01.947 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.947 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.947 Malloc1 00:28:01.947 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.947 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:01.947 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.947 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.947 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.947 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:01.947 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.947 06:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.947 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.947 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:01.947 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.947 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.947 [2024-12-15 06:18:22.011331] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:01.947 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.947 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1082789 00:28:01.947 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:01.947 06:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:28:04.473 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:28:04.473 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.473 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:04.473 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.473 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:28:04.473 "tick_rate": 2100000000, 00:28:04.473 "poll_groups": [ 00:28:04.473 { 00:28:04.473 "name": "nvmf_tgt_poll_group_000", 00:28:04.473 "admin_qpairs": 1, 00:28:04.473 "io_qpairs": 1, 00:28:04.473 "current_admin_qpairs": 1, 00:28:04.473 "current_io_qpairs": 1, 00:28:04.473 "pending_bdev_io": 0, 00:28:04.473 "completed_nvme_io": 20659, 00:28:04.473 "transports": [ 00:28:04.473 { 00:28:04.473 "trtype": "TCP" 00:28:04.473 } 00:28:04.473 ] 00:28:04.473 }, 00:28:04.473 { 00:28:04.473 "name": "nvmf_tgt_poll_group_001", 00:28:04.473 "admin_qpairs": 0, 00:28:04.473 "io_qpairs": 1, 00:28:04.473 "current_admin_qpairs": 0, 00:28:04.473 "current_io_qpairs": 1, 00:28:04.473 "pending_bdev_io": 0, 00:28:04.473 "completed_nvme_io": 20435, 00:28:04.473 "transports": [ 
00:28:04.473 { 00:28:04.473 "trtype": "TCP" 00:28:04.473 } 00:28:04.473 ] 00:28:04.473 }, 00:28:04.473 { 00:28:04.473 "name": "nvmf_tgt_poll_group_002", 00:28:04.473 "admin_qpairs": 0, 00:28:04.473 "io_qpairs": 1, 00:28:04.473 "current_admin_qpairs": 0, 00:28:04.473 "current_io_qpairs": 1, 00:28:04.473 "pending_bdev_io": 0, 00:28:04.473 "completed_nvme_io": 20890, 00:28:04.473 "transports": [ 00:28:04.473 { 00:28:04.473 "trtype": "TCP" 00:28:04.473 } 00:28:04.473 ] 00:28:04.473 }, 00:28:04.473 { 00:28:04.473 "name": "nvmf_tgt_poll_group_003", 00:28:04.473 "admin_qpairs": 0, 00:28:04.473 "io_qpairs": 1, 00:28:04.473 "current_admin_qpairs": 0, 00:28:04.473 "current_io_qpairs": 1, 00:28:04.473 "pending_bdev_io": 0, 00:28:04.473 "completed_nvme_io": 20506, 00:28:04.473 "transports": [ 00:28:04.473 { 00:28:04.473 "trtype": "TCP" 00:28:04.473 } 00:28:04.473 ] 00:28:04.473 } 00:28:04.473 ] 00:28:04.473 }' 00:28:04.473 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:04.473 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:28:04.473 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:28:04.473 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:28:04.473 06:18:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1082789 00:28:12.581 Initializing NVMe Controllers 00:28:12.581 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:12.581 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:12.581 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:12.581 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:12.581 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:12.581 Initialization complete. Launching workers. 00:28:12.581 ======================================================== 00:28:12.581 Latency(us) 00:28:12.581 Device Information : IOPS MiB/s Average min max 00:28:12.581 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10625.40 41.51 6023.58 2148.52 10484.09 00:28:12.581 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10658.30 41.63 6004.54 1963.99 10049.24 00:28:12.581 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10815.00 42.25 5918.14 2117.80 9919.02 00:28:12.581 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10714.40 41.85 5972.88 2153.87 10257.97 00:28:12.581 ======================================================== 00:28:12.581 Total : 42813.09 167.24 5979.52 1963.99 10484.09 00:28:12.581 00:28:12.581 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:12.581 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:12.581 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:12.581 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:12.581 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:12.581 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:12.581 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:12.581 rmmod nvme_tcp 00:28:12.581 rmmod nvme_fabrics 00:28:12.581 rmmod nvme_keyring 00:28:12.582 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:12.582 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:12.582 06:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:12.582 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1082752 ']' 00:28:12.582 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1082752 00:28:12.582 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1082752 ']' 00:28:12.582 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1082752 00:28:12.582 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:12.582 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:12.582 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1082752 00:28:12.582 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:12.582 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:12.582 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1082752' 00:28:12.582 killing process with pid 1082752 00:28:12.582 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1082752 00:28:12.582 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1082752 00:28:12.582 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:12.582 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:12.582 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:12.582 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:12.582 
06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:12.582 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:12.582 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:12.582 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:12.582 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:12.582 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.582 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:12.582 06:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.488 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:14.488 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:14.488 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:14.488 06:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:15.865 06:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:18.400 06:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@476 -- # prepare_net_devs 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:23.673 06:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:23.673 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:23.673 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:23.674 06:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:23.674 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:af:00.0: cvl_0_0' 00:28:23.674 Found net devices under 0000:af:00.0: cvl_0_0 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:23.674 Found net devices under 0000:af:00.1: cvl_0_1 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:23.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:23.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.543 ms 00:28:23.674 00:28:23.674 --- 10.0.0.2 ping statistics --- 00:28:23.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.674 rtt min/avg/max/mdev = 0.543/0.543/0.543/0.000 ms 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:23.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:23.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:28:23.674 00:28:23.674 --- 10.0.0.1 ping statistics --- 00:28:23.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.674 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:23.674 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:23.933 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:23.933 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:23.933 net.core.busy_poll = 1 00:28:23.933 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:23.933 net.core.busy_read = 1 00:28:23.933 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:23.933 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:23.933 06:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:23.933 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:23.933 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:23.933 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:23.933 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:23.933 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:23.933 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:23.933 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1086607 00:28:23.933 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1086607 00:28:23.933 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:28:23.934 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1086607 ']' 00:28:23.934 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.934 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:24.192 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:24.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:24.192 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:24.192 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.193 [2024-12-15 06:18:44.119734] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:24.193 [2024-12-15 06:18:44.119786] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:24.193 [2024-12-15 06:18:44.197820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:24.193 [2024-12-15 06:18:44.221202] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:24.193 [2024-12-15 06:18:44.221241] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:24.193 [2024-12-15 06:18:44.221249] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:24.193 [2024-12-15 06:18:44.221256] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:24.193 [2024-12-15 06:18:44.221261] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:24.193 [2024-12-15 06:18:44.222619] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.193 [2024-12-15 06:18:44.222726] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:24.193 [2024-12-15 06:18:44.222834] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.193 [2024-12-15 06:18:44.222835] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:24.193 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:24.193 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:24.193 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:24.193 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:24.193 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.193 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:24.193 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:24.193 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:24.193 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:24.193 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.193 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.451 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:24.451 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:24.451 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:24.451 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.451 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.451 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.451 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:24.451 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.451 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.451 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.451 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:24.451 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.451 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.451 [2024-12-15 06:18:44.451046] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:24.451 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.451 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:24.451 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.451 06:18:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.451 Malloc1 00:28:24.451 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.452 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:24.452 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.452 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.452 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.452 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:24.452 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.452 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.452 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.452 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:24.452 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.452 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.452 [2024-12-15 06:18:44.510760] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:24.452 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.452 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1086826 
00:28:24.452 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:24.452 06:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:26.980 06:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:26.980 06:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.980 06:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:26.980 06:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.980 06:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:26.980 "tick_rate": 2100000000, 00:28:26.980 "poll_groups": [ 00:28:26.980 { 00:28:26.980 "name": "nvmf_tgt_poll_group_000", 00:28:26.980 "admin_qpairs": 1, 00:28:26.980 "io_qpairs": 2, 00:28:26.980 "current_admin_qpairs": 1, 00:28:26.980 "current_io_qpairs": 2, 00:28:26.980 "pending_bdev_io": 0, 00:28:26.980 "completed_nvme_io": 28245, 00:28:26.980 "transports": [ 00:28:26.980 { 00:28:26.980 "trtype": "TCP" 00:28:26.980 } 00:28:26.980 ] 00:28:26.980 }, 00:28:26.980 { 00:28:26.980 "name": "nvmf_tgt_poll_group_001", 00:28:26.980 "admin_qpairs": 0, 00:28:26.980 "io_qpairs": 2, 00:28:26.980 "current_admin_qpairs": 0, 00:28:26.980 "current_io_qpairs": 2, 00:28:26.980 "pending_bdev_io": 0, 00:28:26.980 "completed_nvme_io": 28099, 00:28:26.980 "transports": [ 00:28:26.980 { 00:28:26.980 "trtype": "TCP" 00:28:26.980 } 00:28:26.980 ] 00:28:26.980 }, 00:28:26.980 { 00:28:26.980 "name": "nvmf_tgt_poll_group_002", 00:28:26.980 "admin_qpairs": 0, 00:28:26.980 "io_qpairs": 0, 00:28:26.980 "current_admin_qpairs": 0, 
00:28:26.980 "current_io_qpairs": 0, 00:28:26.980 "pending_bdev_io": 0, 00:28:26.980 "completed_nvme_io": 0, 00:28:26.980 "transports": [ 00:28:26.980 { 00:28:26.980 "trtype": "TCP" 00:28:26.980 } 00:28:26.980 ] 00:28:26.980 }, 00:28:26.980 { 00:28:26.980 "name": "nvmf_tgt_poll_group_003", 00:28:26.980 "admin_qpairs": 0, 00:28:26.980 "io_qpairs": 0, 00:28:26.980 "current_admin_qpairs": 0, 00:28:26.980 "current_io_qpairs": 0, 00:28:26.980 "pending_bdev_io": 0, 00:28:26.980 "completed_nvme_io": 0, 00:28:26.980 "transports": [ 00:28:26.980 { 00:28:26.980 "trtype": "TCP" 00:28:26.980 } 00:28:26.980 ] 00:28:26.980 } 00:28:26.980 ] 00:28:26.980 }' 00:28:26.980 06:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:26.980 06:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:26.980 06:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:26.981 06:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:26.981 06:18:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1086826 00:28:35.080 Initializing NVMe Controllers 00:28:35.081 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:35.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:35.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:35.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:35.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:35.081 Initialization complete. Launching workers. 
00:28:35.081 ======================================================== 00:28:35.081 Latency(us) 00:28:35.081 Device Information : IOPS MiB/s Average min max 00:28:35.081 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7786.78 30.42 8220.30 1599.12 53566.81 00:28:35.081 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7428.78 29.02 8633.12 1496.28 53707.38 00:28:35.081 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7033.88 27.48 9100.45 1574.18 52766.62 00:28:35.081 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7401.88 28.91 8648.34 1646.05 52889.24 00:28:35.081 ======================================================== 00:28:35.081 Total : 29651.32 115.83 8639.37 1496.28 53707.38 00:28:35.081 00:28:35.081 06:18:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:35.081 06:18:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:35.081 06:18:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:35.081 06:18:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:35.081 06:18:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:35.081 06:18:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:35.081 06:18:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:35.081 rmmod nvme_tcp 00:28:35.081 rmmod nvme_fabrics 00:28:35.081 rmmod nvme_keyring 00:28:35.081 06:18:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:35.081 06:18:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:35.081 06:18:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:35.081 06:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1086607 ']' 00:28:35.081 06:18:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1086607 00:28:35.081 06:18:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1086607 ']' 00:28:35.081 06:18:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1086607 00:28:35.081 06:18:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:35.081 06:18:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:35.081 06:18:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1086607 00:28:35.081 06:18:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:35.081 06:18:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:35.081 06:18:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1086607' 00:28:35.081 killing process with pid 1086607 00:28:35.081 06:18:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1086607 00:28:35.081 06:18:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1086607 00:28:35.081 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:35.081 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:35.081 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:35.081 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:35.081 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:35.081 
06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:35.081 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:35.081 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:35.081 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:35.081 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.081 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:35.081 06:18:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:36.984 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:37.243 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:37.243 00:28:37.243 real 0m50.206s 00:28:37.243 user 2m43.963s 00:28:37.243 sys 0m10.182s 00:28:37.243 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:37.243 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:37.243 ************************************ 00:28:37.243 END TEST nvmf_perf_adq 00:28:37.243 ************************************ 00:28:37.243 06:18:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:37.243 06:18:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:37.243 06:18:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:37.243 06:18:57 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:28:37.243 ************************************ 00:28:37.243 START TEST nvmf_shutdown 00:28:37.243 ************************************ 00:28:37.243 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:37.243 * Looking for test storage... 00:28:37.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:37.243 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:37.243 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:28:37.243 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:37.243 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:37.243 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:37.502 06:18:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:37.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.502 --rc genhtml_branch_coverage=1 00:28:37.502 --rc genhtml_function_coverage=1 00:28:37.502 --rc genhtml_legend=1 00:28:37.502 --rc geninfo_all_blocks=1 00:28:37.502 --rc geninfo_unexecuted_blocks=1 00:28:37.502 00:28:37.502 ' 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:37.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.502 --rc genhtml_branch_coverage=1 00:28:37.502 --rc genhtml_function_coverage=1 00:28:37.502 --rc genhtml_legend=1 00:28:37.502 --rc geninfo_all_blocks=1 00:28:37.502 --rc geninfo_unexecuted_blocks=1 00:28:37.502 00:28:37.502 ' 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:37.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.502 --rc genhtml_branch_coverage=1 00:28:37.502 --rc genhtml_function_coverage=1 00:28:37.502 --rc genhtml_legend=1 00:28:37.502 --rc geninfo_all_blocks=1 00:28:37.502 --rc geninfo_unexecuted_blocks=1 00:28:37.502 00:28:37.502 ' 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:37.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.502 --rc genhtml_branch_coverage=1 00:28:37.502 --rc genhtml_function_coverage=1 00:28:37.502 --rc genhtml_legend=1 00:28:37.502 --rc geninfo_all_blocks=1 00:28:37.502 --rc geninfo_unexecuted_blocks=1 00:28:37.502 00:28:37.502 ' 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:37.502 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:37.503 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:37.503 ************************************ 00:28:37.503 START TEST nvmf_shutdown_tc1 00:28:37.503 ************************************ 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:37.503 06:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:44.071 06:19:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:44.071 06:19:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:44.071 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:44.071 06:19:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:44.071 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:44.071 Found net devices under 0000:af:00.0: cvl_0_0 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:44.071 Found net devices under 0000:af:00.1: cvl_0_1 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:44.071 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:44.072 06:19:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:44.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:44.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:28:44.072 00:28:44.072 --- 10.0.0.2 ping statistics --- 00:28:44.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.072 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:44.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:44.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:28:44.072 00:28:44.072 --- 10.0.0.1 ping statistics --- 00:28:44.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.072 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1091946 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1091946 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1091946 ']' 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:44.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:44.072 [2024-12-15 06:19:03.550135] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:44.072 [2024-12-15 06:19:03.550185] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:44.072 [2024-12-15 06:19:03.627358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:44.072 [2024-12-15 06:19:03.650732] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:44.072 [2024-12-15 06:19:03.650769] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:44.072 [2024-12-15 06:19:03.650776] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:44.072 [2024-12-15 06:19:03.650783] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:44.072 [2024-12-15 06:19:03.650789] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:44.072 [2024-12-15 06:19:03.652144] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:44.072 [2024-12-15 06:19:03.652253] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:44.072 [2024-12-15 06:19:03.652338] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.072 [2024-12-15 06:19:03.652339] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:44.072 [2024-12-15 06:19:03.792217] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.072 06:19:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:44.072 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:44.073 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:44.073 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:44.073 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:44.073 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:44.073 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:44.073 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:44.073 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.073 06:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:44.073 Malloc1 00:28:44.073 [2024-12-15 06:19:03.916083] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:44.073 Malloc2 00:28:44.073 Malloc3 00:28:44.073 Malloc4 00:28:44.073 Malloc5 00:28:44.073 Malloc6 00:28:44.073 Malloc7 00:28:44.073 Malloc8 00:28:44.331 Malloc9 
00:28:44.331 Malloc10 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1092133 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1092133 /var/tmp/bdevperf.sock 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1092133 ']' 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:44.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.331 { 00:28:44.331 "params": { 00:28:44.331 "name": "Nvme$subsystem", 00:28:44.331 "trtype": "$TEST_TRANSPORT", 00:28:44.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.331 "adrfam": "ipv4", 00:28:44.331 "trsvcid": "$NVMF_PORT", 00:28:44.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.331 "hdgst": ${hdgst:-false}, 00:28:44.331 "ddgst": ${ddgst:-false} 00:28:44.331 }, 00:28:44.331 "method": "bdev_nvme_attach_controller" 00:28:44.331 } 00:28:44.331 EOF 00:28:44.331 )") 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.331 { 00:28:44.331 "params": { 00:28:44.331 "name": "Nvme$subsystem", 00:28:44.331 "trtype": "$TEST_TRANSPORT", 00:28:44.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.331 "adrfam": "ipv4", 00:28:44.331 "trsvcid": "$NVMF_PORT", 00:28:44.331 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.331 "hdgst": ${hdgst:-false}, 00:28:44.331 "ddgst": ${ddgst:-false} 00:28:44.331 }, 00:28:44.331 "method": "bdev_nvme_attach_controller" 00:28:44.331 } 00:28:44.331 EOF 00:28:44.331 )") 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.331 { 00:28:44.331 "params": { 00:28:44.331 "name": "Nvme$subsystem", 00:28:44.331 "trtype": "$TEST_TRANSPORT", 00:28:44.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.331 "adrfam": "ipv4", 00:28:44.331 "trsvcid": "$NVMF_PORT", 00:28:44.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.331 "hdgst": ${hdgst:-false}, 00:28:44.331 "ddgst": ${ddgst:-false} 00:28:44.331 }, 00:28:44.331 "method": "bdev_nvme_attach_controller" 00:28:44.331 } 00:28:44.331 EOF 00:28:44.331 )") 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.331 { 00:28:44.331 "params": { 00:28:44.331 "name": "Nvme$subsystem", 00:28:44.331 "trtype": "$TEST_TRANSPORT", 00:28:44.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.331 "adrfam": "ipv4", 00:28:44.331 "trsvcid": "$NVMF_PORT", 00:28:44.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.331 "hdgst": 
${hdgst:-false}, 00:28:44.331 "ddgst": ${ddgst:-false} 00:28:44.331 }, 00:28:44.331 "method": "bdev_nvme_attach_controller" 00:28:44.331 } 00:28:44.331 EOF 00:28:44.331 )") 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.331 { 00:28:44.331 "params": { 00:28:44.331 "name": "Nvme$subsystem", 00:28:44.331 "trtype": "$TEST_TRANSPORT", 00:28:44.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.331 "adrfam": "ipv4", 00:28:44.331 "trsvcid": "$NVMF_PORT", 00:28:44.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.331 "hdgst": ${hdgst:-false}, 00:28:44.331 "ddgst": ${ddgst:-false} 00:28:44.331 }, 00:28:44.331 "method": "bdev_nvme_attach_controller" 00:28:44.331 } 00:28:44.331 EOF 00:28:44.331 )") 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.331 { 00:28:44.331 "params": { 00:28:44.331 "name": "Nvme$subsystem", 00:28:44.331 "trtype": "$TEST_TRANSPORT", 00:28:44.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.331 "adrfam": "ipv4", 00:28:44.331 "trsvcid": "$NVMF_PORT", 00:28:44.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.331 "hdgst": ${hdgst:-false}, 00:28:44.331 "ddgst": ${ddgst:-false} 00:28:44.331 }, 00:28:44.331 "method": "bdev_nvme_attach_controller" 
00:28:44.331 } 00:28:44.331 EOF 00:28:44.331 )") 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.331 { 00:28:44.331 "params": { 00:28:44.331 "name": "Nvme$subsystem", 00:28:44.331 "trtype": "$TEST_TRANSPORT", 00:28:44.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.331 "adrfam": "ipv4", 00:28:44.331 "trsvcid": "$NVMF_PORT", 00:28:44.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.331 "hdgst": ${hdgst:-false}, 00:28:44.331 "ddgst": ${ddgst:-false} 00:28:44.331 }, 00:28:44.331 "method": "bdev_nvme_attach_controller" 00:28:44.331 } 00:28:44.331 EOF 00:28:44.331 )") 00:28:44.331 [2024-12-15 06:19:04.392943] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:28:44.331 [2024-12-15 06:19:04.392989] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.331 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.331 { 00:28:44.331 "params": { 00:28:44.332 "name": "Nvme$subsystem", 00:28:44.332 "trtype": "$TEST_TRANSPORT", 00:28:44.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.332 "adrfam": "ipv4", 00:28:44.332 "trsvcid": "$NVMF_PORT", 00:28:44.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.332 "hdgst": ${hdgst:-false}, 00:28:44.332 "ddgst": ${ddgst:-false} 00:28:44.332 }, 00:28:44.332 "method": "bdev_nvme_attach_controller" 00:28:44.332 } 00:28:44.332 EOF 00:28:44.332 )") 00:28:44.332 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:44.332 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.332 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.332 { 00:28:44.332 "params": { 00:28:44.332 "name": "Nvme$subsystem", 00:28:44.332 "trtype": "$TEST_TRANSPORT", 00:28:44.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.332 "adrfam": "ipv4", 00:28:44.332 "trsvcid": "$NVMF_PORT", 00:28:44.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.332 "hdgst": ${hdgst:-false}, 
00:28:44.332 "ddgst": ${ddgst:-false} 00:28:44.332 }, 00:28:44.332 "method": "bdev_nvme_attach_controller" 00:28:44.332 } 00:28:44.332 EOF 00:28:44.332 )") 00:28:44.332 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:44.332 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.332 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.332 { 00:28:44.332 "params": { 00:28:44.332 "name": "Nvme$subsystem", 00:28:44.332 "trtype": "$TEST_TRANSPORT", 00:28:44.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.332 "adrfam": "ipv4", 00:28:44.332 "trsvcid": "$NVMF_PORT", 00:28:44.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.332 "hdgst": ${hdgst:-false}, 00:28:44.332 "ddgst": ${ddgst:-false} 00:28:44.332 }, 00:28:44.332 "method": "bdev_nvme_attach_controller" 00:28:44.332 } 00:28:44.332 EOF 00:28:44.332 )") 00:28:44.332 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:44.332 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:28:44.332 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:44.332 06:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:44.332 "params": { 00:28:44.332 "name": "Nvme1", 00:28:44.332 "trtype": "tcp", 00:28:44.332 "traddr": "10.0.0.2", 00:28:44.332 "adrfam": "ipv4", 00:28:44.332 "trsvcid": "4420", 00:28:44.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:44.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:44.332 "hdgst": false, 00:28:44.332 "ddgst": false 00:28:44.332 }, 00:28:44.332 "method": "bdev_nvme_attach_controller" 00:28:44.332 },{ 00:28:44.332 "params": { 00:28:44.332 "name": "Nvme2", 00:28:44.332 "trtype": "tcp", 00:28:44.332 "traddr": "10.0.0.2", 00:28:44.332 "adrfam": "ipv4", 00:28:44.332 "trsvcid": "4420", 00:28:44.332 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:44.332 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:44.332 "hdgst": false, 00:28:44.332 "ddgst": false 00:28:44.332 }, 00:28:44.332 "method": "bdev_nvme_attach_controller" 00:28:44.332 },{ 00:28:44.332 "params": { 00:28:44.332 "name": "Nvme3", 00:28:44.332 "trtype": "tcp", 00:28:44.332 "traddr": "10.0.0.2", 00:28:44.332 "adrfam": "ipv4", 00:28:44.332 "trsvcid": "4420", 00:28:44.332 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:44.332 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:44.332 "hdgst": false, 00:28:44.332 "ddgst": false 00:28:44.332 }, 00:28:44.332 "method": "bdev_nvme_attach_controller" 00:28:44.332 },{ 00:28:44.332 "params": { 00:28:44.332 "name": "Nvme4", 00:28:44.332 "trtype": "tcp", 00:28:44.332 "traddr": "10.0.0.2", 00:28:44.332 "adrfam": "ipv4", 00:28:44.332 "trsvcid": "4420", 00:28:44.332 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:44.332 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:44.332 "hdgst": false, 00:28:44.332 "ddgst": false 00:28:44.332 }, 00:28:44.332 "method": "bdev_nvme_attach_controller" 00:28:44.332 },{ 00:28:44.332 "params": { 
00:28:44.332 "name": "Nvme5", 00:28:44.332 "trtype": "tcp", 00:28:44.332 "traddr": "10.0.0.2", 00:28:44.332 "adrfam": "ipv4", 00:28:44.332 "trsvcid": "4420", 00:28:44.332 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:44.332 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:44.332 "hdgst": false, 00:28:44.332 "ddgst": false 00:28:44.332 }, 00:28:44.332 "method": "bdev_nvme_attach_controller" 00:28:44.332 },{ 00:28:44.332 "params": { 00:28:44.332 "name": "Nvme6", 00:28:44.332 "trtype": "tcp", 00:28:44.332 "traddr": "10.0.0.2", 00:28:44.332 "adrfam": "ipv4", 00:28:44.332 "trsvcid": "4420", 00:28:44.332 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:44.332 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:44.332 "hdgst": false, 00:28:44.332 "ddgst": false 00:28:44.332 }, 00:28:44.332 "method": "bdev_nvme_attach_controller" 00:28:44.332 },{ 00:28:44.332 "params": { 00:28:44.332 "name": "Nvme7", 00:28:44.332 "trtype": "tcp", 00:28:44.332 "traddr": "10.0.0.2", 00:28:44.332 "adrfam": "ipv4", 00:28:44.332 "trsvcid": "4420", 00:28:44.332 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:44.332 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:44.332 "hdgst": false, 00:28:44.332 "ddgst": false 00:28:44.332 }, 00:28:44.332 "method": "bdev_nvme_attach_controller" 00:28:44.332 },{ 00:28:44.332 "params": { 00:28:44.332 "name": "Nvme8", 00:28:44.332 "trtype": "tcp", 00:28:44.332 "traddr": "10.0.0.2", 00:28:44.332 "adrfam": "ipv4", 00:28:44.332 "trsvcid": "4420", 00:28:44.332 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:44.332 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:44.332 "hdgst": false, 00:28:44.332 "ddgst": false 00:28:44.332 }, 00:28:44.332 "method": "bdev_nvme_attach_controller" 00:28:44.332 },{ 00:28:44.332 "params": { 00:28:44.332 "name": "Nvme9", 00:28:44.332 "trtype": "tcp", 00:28:44.332 "traddr": "10.0.0.2", 00:28:44.332 "adrfam": "ipv4", 00:28:44.332 "trsvcid": "4420", 00:28:44.332 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:44.332 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:28:44.332 "hdgst": false, 00:28:44.332 "ddgst": false 00:28:44.332 }, 00:28:44.332 "method": "bdev_nvme_attach_controller" 00:28:44.332 },{ 00:28:44.332 "params": { 00:28:44.332 "name": "Nvme10", 00:28:44.332 "trtype": "tcp", 00:28:44.332 "traddr": "10.0.0.2", 00:28:44.332 "adrfam": "ipv4", 00:28:44.332 "trsvcid": "4420", 00:28:44.332 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:44.332 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:44.332 "hdgst": false, 00:28:44.332 "ddgst": false 00:28:44.332 }, 00:28:44.332 "method": "bdev_nvme_attach_controller" 00:28:44.332 }' 00:28:44.332 [2024-12-15 06:19:04.467182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.589 [2024-12-15 06:19:04.490029] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.480 06:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:46.480 06:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:46.480 06:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:46.480 06:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.480 06:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:46.480 06:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.480 06:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1092133 00:28:46.480 06:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:46.480 06:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:47.413 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1092133 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:47.413 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1091946 00:28:47.413 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:47.413 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:47.413 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:47.413 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:47.413 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:47.413 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:47.413 { 00:28:47.413 "params": { 00:28:47.413 "name": "Nvme$subsystem", 00:28:47.413 "trtype": "$TEST_TRANSPORT", 00:28:47.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.413 "adrfam": "ipv4", 00:28:47.413 "trsvcid": "$NVMF_PORT", 00:28:47.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.413 "hdgst": ${hdgst:-false}, 00:28:47.413 "ddgst": ${ddgst:-false} 00:28:47.413 }, 00:28:47.413 "method": "bdev_nvme_attach_controller" 00:28:47.413 } 00:28:47.413 EOF 00:28:47.413 )") 00:28:47.413 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:47.413 06:19:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:47.413 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:47.413 { 00:28:47.413 "params": { 00:28:47.413 "name": "Nvme$subsystem", 00:28:47.413 "trtype": "$TEST_TRANSPORT", 00:28:47.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.413 "adrfam": "ipv4", 00:28:47.413 "trsvcid": "$NVMF_PORT", 00:28:47.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.413 "hdgst": ${hdgst:-false}, 00:28:47.413 "ddgst": ${ddgst:-false} 00:28:47.413 }, 00:28:47.413 "method": "bdev_nvme_attach_controller" 00:28:47.413 } 00:28:47.413 EOF 00:28:47.413 )") 00:28:47.413 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:47.413 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:47.413 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:47.413 { 00:28:47.413 "params": { 00:28:47.414 "name": "Nvme$subsystem", 00:28:47.414 "trtype": "$TEST_TRANSPORT", 00:28:47.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.414 "adrfam": "ipv4", 00:28:47.414 "trsvcid": "$NVMF_PORT", 00:28:47.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.414 "hdgst": ${hdgst:-false}, 00:28:47.414 "ddgst": ${ddgst:-false} 00:28:47.414 }, 00:28:47.414 "method": "bdev_nvme_attach_controller" 00:28:47.414 } 00:28:47.414 EOF 00:28:47.414 )") 00:28:47.414 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:47.414 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:47.414 
06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:47.414 { 00:28:47.414 "params": { 00:28:47.414 "name": "Nvme$subsystem", 00:28:47.414 "trtype": "$TEST_TRANSPORT", 00:28:47.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.414 "adrfam": "ipv4", 00:28:47.414 "trsvcid": "$NVMF_PORT", 00:28:47.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.414 "hdgst": ${hdgst:-false}, 00:28:47.414 "ddgst": ${ddgst:-false} 00:28:47.414 }, 00:28:47.414 "method": "bdev_nvme_attach_controller" 00:28:47.414 } 00:28:47.414 EOF 00:28:47.414 )") 00:28:47.414 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:47.414 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:47.414 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:47.414 { 00:28:47.414 "params": { 00:28:47.414 "name": "Nvme$subsystem", 00:28:47.414 "trtype": "$TEST_TRANSPORT", 00:28:47.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.414 "adrfam": "ipv4", 00:28:47.414 "trsvcid": "$NVMF_PORT", 00:28:47.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.414 "hdgst": ${hdgst:-false}, 00:28:47.414 "ddgst": ${ddgst:-false} 00:28:47.414 }, 00:28:47.414 "method": "bdev_nvme_attach_controller" 00:28:47.414 } 00:28:47.414 EOF 00:28:47.414 )") 00:28:47.414 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:47.414 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:47.414 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:28:47.414 { 00:28:47.414 "params": { 00:28:47.414 "name": "Nvme$subsystem", 00:28:47.414 "trtype": "$TEST_TRANSPORT", 00:28:47.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.414 "adrfam": "ipv4", 00:28:47.414 "trsvcid": "$NVMF_PORT", 00:28:47.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.414 "hdgst": ${hdgst:-false}, 00:28:47.414 "ddgst": ${ddgst:-false} 00:28:47.414 }, 00:28:47.414 "method": "bdev_nvme_attach_controller" 00:28:47.414 } 00:28:47.414 EOF 00:28:47.414 )") 00:28:47.414 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:47.414 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:47.414 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:47.414 { 00:28:47.414 "params": { 00:28:47.414 "name": "Nvme$subsystem", 00:28:47.414 "trtype": "$TEST_TRANSPORT", 00:28:47.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.414 "adrfam": "ipv4", 00:28:47.414 "trsvcid": "$NVMF_PORT", 00:28:47.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.414 "hdgst": ${hdgst:-false}, 00:28:47.414 "ddgst": ${ddgst:-false} 00:28:47.414 }, 00:28:47.414 "method": "bdev_nvme_attach_controller" 00:28:47.414 } 00:28:47.414 EOF 00:28:47.414 )") 00:28:47.414 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:47.414 [2024-12-15 06:19:07.321167] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:28:47.414 [2024-12-15 06:19:07.321231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1092661 ] 00:28:47.414 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:47.414 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:47.414 { 00:28:47.414 "params": { 00:28:47.414 "name": "Nvme$subsystem", 00:28:47.414 "trtype": "$TEST_TRANSPORT", 00:28:47.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.414 "adrfam": "ipv4", 00:28:47.414 "trsvcid": "$NVMF_PORT", 00:28:47.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.414 "hdgst": ${hdgst:-false}, 00:28:47.414 "ddgst": ${ddgst:-false} 00:28:47.414 }, 00:28:47.414 "method": "bdev_nvme_attach_controller" 00:28:47.414 } 00:28:47.414 EOF 00:28:47.414 )") 00:28:47.414 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:47.414 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:47.414 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:47.414 { 00:28:47.414 "params": { 00:28:47.414 "name": "Nvme$subsystem", 00:28:47.414 "trtype": "$TEST_TRANSPORT", 00:28:47.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.414 "adrfam": "ipv4", 00:28:47.414 "trsvcid": "$NVMF_PORT", 00:28:47.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.414 "hdgst": ${hdgst:-false}, 00:28:47.414 "ddgst": ${ddgst:-false} 00:28:47.414 }, 00:28:47.414 "method": 
"bdev_nvme_attach_controller" 00:28:47.414 } 00:28:47.414 EOF 00:28:47.414 )") 00:28:47.414 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:47.414 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:47.414 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:47.414 { 00:28:47.414 "params": { 00:28:47.414 "name": "Nvme$subsystem", 00:28:47.414 "trtype": "$TEST_TRANSPORT", 00:28:47.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.414 "adrfam": "ipv4", 00:28:47.414 "trsvcid": "$NVMF_PORT", 00:28:47.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.414 "hdgst": ${hdgst:-false}, 00:28:47.414 "ddgst": ${ddgst:-false} 00:28:47.414 }, 00:28:47.414 "method": "bdev_nvme_attach_controller" 00:28:47.414 } 00:28:47.414 EOF 00:28:47.414 )") 00:28:47.414 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:47.414 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:28:47.414 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:47.414 06:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:47.414 "params": { 00:28:47.414 "name": "Nvme1", 00:28:47.414 "trtype": "tcp", 00:28:47.414 "traddr": "10.0.0.2", 00:28:47.414 "adrfam": "ipv4", 00:28:47.414 "trsvcid": "4420", 00:28:47.414 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:47.414 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:47.414 "hdgst": false, 00:28:47.414 "ddgst": false 00:28:47.414 }, 00:28:47.414 "method": "bdev_nvme_attach_controller" 00:28:47.414 },{ 00:28:47.414 "params": { 00:28:47.414 "name": "Nvme2", 00:28:47.414 "trtype": "tcp", 00:28:47.414 "traddr": "10.0.0.2", 00:28:47.414 "adrfam": "ipv4", 00:28:47.414 "trsvcid": "4420", 00:28:47.414 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:47.414 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:47.414 "hdgst": false, 00:28:47.414 "ddgst": false 00:28:47.414 }, 00:28:47.414 "method": "bdev_nvme_attach_controller" 00:28:47.414 },{ 00:28:47.414 "params": { 00:28:47.414 "name": "Nvme3", 00:28:47.414 "trtype": "tcp", 00:28:47.414 "traddr": "10.0.0.2", 00:28:47.414 "adrfam": "ipv4", 00:28:47.414 "trsvcid": "4420", 00:28:47.414 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:47.414 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:47.414 "hdgst": false, 00:28:47.414 "ddgst": false 00:28:47.414 }, 00:28:47.414 "method": "bdev_nvme_attach_controller" 00:28:47.414 },{ 00:28:47.414 "params": { 00:28:47.414 "name": "Nvme4", 00:28:47.414 "trtype": "tcp", 00:28:47.414 "traddr": "10.0.0.2", 00:28:47.414 "adrfam": "ipv4", 00:28:47.414 "trsvcid": "4420", 00:28:47.414 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:47.414 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:47.414 "hdgst": false, 00:28:47.414 "ddgst": false 00:28:47.414 }, 00:28:47.414 "method": "bdev_nvme_attach_controller" 00:28:47.414 },{ 00:28:47.414 "params": { 
00:28:47.414 "name": "Nvme5", 00:28:47.414 "trtype": "tcp", 00:28:47.414 "traddr": "10.0.0.2", 00:28:47.414 "adrfam": "ipv4", 00:28:47.414 "trsvcid": "4420", 00:28:47.414 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:47.414 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:47.414 "hdgst": false, 00:28:47.414 "ddgst": false 00:28:47.414 }, 00:28:47.414 "method": "bdev_nvme_attach_controller" 00:28:47.415 },{ 00:28:47.415 "params": { 00:28:47.415 "name": "Nvme6", 00:28:47.415 "trtype": "tcp", 00:28:47.415 "traddr": "10.0.0.2", 00:28:47.415 "adrfam": "ipv4", 00:28:47.415 "trsvcid": "4420", 00:28:47.415 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:47.415 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:47.415 "hdgst": false, 00:28:47.415 "ddgst": false 00:28:47.415 }, 00:28:47.415 "method": "bdev_nvme_attach_controller" 00:28:47.415 },{ 00:28:47.415 "params": { 00:28:47.415 "name": "Nvme7", 00:28:47.415 "trtype": "tcp", 00:28:47.415 "traddr": "10.0.0.2", 00:28:47.415 "adrfam": "ipv4", 00:28:47.415 "trsvcid": "4420", 00:28:47.415 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:47.415 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:47.415 "hdgst": false, 00:28:47.415 "ddgst": false 00:28:47.415 }, 00:28:47.415 "method": "bdev_nvme_attach_controller" 00:28:47.415 },{ 00:28:47.415 "params": { 00:28:47.415 "name": "Nvme8", 00:28:47.415 "trtype": "tcp", 00:28:47.415 "traddr": "10.0.0.2", 00:28:47.415 "adrfam": "ipv4", 00:28:47.415 "trsvcid": "4420", 00:28:47.415 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:47.415 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:47.415 "hdgst": false, 00:28:47.415 "ddgst": false 00:28:47.415 }, 00:28:47.415 "method": "bdev_nvme_attach_controller" 00:28:47.415 },{ 00:28:47.415 "params": { 00:28:47.415 "name": "Nvme9", 00:28:47.415 "trtype": "tcp", 00:28:47.415 "traddr": "10.0.0.2", 00:28:47.415 "adrfam": "ipv4", 00:28:47.415 "trsvcid": "4420", 00:28:47.415 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:47.415 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:28:47.415 "hdgst": false, 00:28:47.415 "ddgst": false 00:28:47.415 }, 00:28:47.415 "method": "bdev_nvme_attach_controller" 00:28:47.415 },{ 00:28:47.415 "params": { 00:28:47.415 "name": "Nvme10", 00:28:47.415 "trtype": "tcp", 00:28:47.415 "traddr": "10.0.0.2", 00:28:47.415 "adrfam": "ipv4", 00:28:47.415 "trsvcid": "4420", 00:28:47.415 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:47.415 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:47.415 "hdgst": false, 00:28:47.415 "ddgst": false 00:28:47.415 }, 00:28:47.415 "method": "bdev_nvme_attach_controller" 00:28:47.415 }' 00:28:47.415 [2024-12-15 06:19:07.398483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.415 [2024-12-15 06:19:07.421075] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.785 Running I/O for 1 seconds... 00:28:49.718 2276.00 IOPS, 142.25 MiB/s 00:28:49.718 Latency(us) 00:28:49.718 [2024-12-15T05:19:09.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.718 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:49.718 Verification LBA range: start 0x0 length 0x400 00:28:49.718 Nvme1n1 : 1.06 241.68 15.10 0.00 0.00 262341.97 16976.94 216705.71 00:28:49.718 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:49.718 Verification LBA range: start 0x0 length 0x400 00:28:49.718 Nvme2n1 : 1.05 244.30 15.27 0.00 0.00 255729.13 16602.45 223696.21 00:28:49.718 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:49.718 Verification LBA range: start 0x0 length 0x400 00:28:49.718 Nvme3n1 : 1.06 319.26 19.95 0.00 0.00 190387.74 7333.79 211712.49 00:28:49.718 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:49.718 Verification LBA range: start 0x0 length 0x400 00:28:49.718 Nvme4n1 : 1.12 286.70 17.92 0.00 0.00 211932.01 13169.62 220700.28 00:28:49.718 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:28:49.718 Verification LBA range: start 0x0 length 0x400 00:28:49.718 Nvme5n1 : 1.11 288.21 18.01 0.00 0.00 207561.09 15978.30 215707.06 00:28:49.718 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:49.718 Verification LBA range: start 0x0 length 0x400 00:28:49.718 Nvme6n1 : 1.11 292.05 18.25 0.00 0.00 201278.15 5180.46 200727.41 00:28:49.718 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:49.718 Verification LBA range: start 0x0 length 0x400 00:28:49.718 Nvme7n1 : 1.12 285.74 17.86 0.00 0.00 203434.76 15541.39 223696.21 00:28:49.718 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:49.718 Verification LBA range: start 0x0 length 0x400 00:28:49.718 Nvme8n1 : 1.13 288.66 18.04 0.00 0.00 198009.46 3978.97 211712.49 00:28:49.718 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:49.718 Verification LBA range: start 0x0 length 0x400 00:28:49.718 Nvme9n1 : 1.15 282.71 17.67 0.00 0.00 199648.72 2886.70 219701.64 00:28:49.718 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:49.718 Verification LBA range: start 0x0 length 0x400 00:28:49.718 Nvme10n1 : 1.16 331.98 20.75 0.00 0.00 167884.86 3869.74 237677.23 00:28:49.718 [2024-12-15T05:19:09.858Z] =================================================================================================================== 00:28:49.718 [2024-12-15T05:19:09.858Z] Total : 2861.28 178.83 0.00 0.00 206824.55 2886.70 237677.23 00:28:49.976 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:49.976 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:49.976 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:28:49.976 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:49.976 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:49.976 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:49.976 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:28:49.976 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:49.976 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:28:49.976 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:49.976 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:49.976 rmmod nvme_tcp 00:28:49.976 rmmod nvme_fabrics 00:28:49.976 rmmod nvme_keyring 00:28:49.976 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:49.976 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:28:49.976 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:28:49.976 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1091946 ']' 00:28:49.976 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1091946 00:28:49.976 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1091946 ']' 00:28:49.976 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 1091946 00:28:49.976 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:28:49.976 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:49.976 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1091946 00:28:50.234 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:50.234 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:50.234 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1091946' 00:28:50.234 killing process with pid 1091946 00:28:50.234 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1091946 00:28:50.234 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1091946 00:28:50.492 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:50.492 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:50.492 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:50.492 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:28:50.492 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:28:50.492 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:50.492 06:19:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:28:50.492 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:50.492 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:50.492 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.492 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:50.492 06:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:53.028 00:28:53.028 real 0m15.094s 00:28:53.028 user 0m33.072s 00:28:53.028 sys 0m5.740s 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:53.028 ************************************ 00:28:53.028 END TEST nvmf_shutdown_tc1 00:28:53.028 ************************************ 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:53.028 ************************************ 00:28:53.028 
START TEST nvmf_shutdown_tc2 00:28:53.028 ************************************ 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:53.028 06:19:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:53.028 06:19:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:53.028 06:19:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:53.028 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:53.028 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:53.028 06:19:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.028 06:19:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:53.028 Found net devices under 0000:af:00.0: cvl_0_0 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:53.028 Found net devices under 0000:af:00.1: cvl_0_1 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.028 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:53.029 06:19:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:53.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:53.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:28:53.029 00:28:53.029 --- 10.0.0.2 ping statistics --- 00:28:53.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.029 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:53.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:53.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:28:53.029 00:28:53.029 --- 10.0.0.1 ping statistics --- 00:28:53.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.029 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:53.029 06:19:12 
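The `nvmf_tcp_init` sequence above isolates the target-side NIC in its own network namespace, addresses both ends on 10.0.0.0/24, opens TCP port 4420 through iptables, and verifies reachability with a ping in each direction. A minimal dry-run sketch of those same steps follows; interface names, the namespace name, and addresses are taken from the log, and the `run` wrapper only echoes each command since the real ones require root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup performed by nvmf_tcp_init above.
# We echo instead of executing: the real commands need root privileges.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk             # namespace holding the target side
TGT_IF=cvl_0_0 INI_IF=cvl_0_1  # interface names from the log

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"       # move the target NIC into the ns
run ip addr add 10.0.0.1/24 dev "$INI_IF"   # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                      # verify target reachability
```

Because the target NIC lives in a separate namespace, the 10.0.0.1 to 10.0.0.2 pings really cross the physical link rather than being short-circuited through loopback.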
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1093637 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1093637 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1093637 ']' 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:53.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:53.029 06:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:53.029 [2024-12-15 06:19:12.980834] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:53.029 [2024-12-15 06:19:12.980880] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:53.029 [2024-12-15 06:19:13.063502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:53.029 [2024-12-15 06:19:13.085757] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:53.029 [2024-12-15 06:19:13.085794] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:53.029 [2024-12-15 06:19:13.085801] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:53.029 [2024-12-15 06:19:13.085807] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:53.029 [2024-12-15 06:19:13.085812] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:53.029 [2024-12-15 06:19:13.087152] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:53.029 [2024-12-15 06:19:13.087170] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:53.029 [2024-12-15 06:19:13.087267] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:53.029 [2024-12-15 06:19:13.087268] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:53.287 [2024-12-15 06:19:13.226419] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.287 06:19:13 
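The `waitforlisten 1093637` call above blocks until the freshly launched `nvmf_tgt` is up and listening on `/var/tmp/spdk.sock`. A simplified poll in the same spirit is sketched below; `wait_for_sock` is a hypothetical name, not the SPDK helper itself, and the real helper additionally checks that the pid is still alive and that the RPC endpoint answers:

```shell
# Sketch of a waitforlisten-style poll: spin until a UNIX-domain RPC
# socket appears on disk, giving up after a bounded number of retries.
# wait_for_sock is hypothetical; the real SPDK helper does more checks.
wait_for_sock() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -S "$sock" ] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    return 1
}
```

Polling with a retry cap keeps the test from hanging forever when the target crashes during startup, which is exactly why the log prints the "Waiting for process to start up..." marker before the loop.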
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.287 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:53.287 Malloc1 00:28:53.287 [2024-12-15 06:19:13.330291] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:53.287 Malloc2 00:28:53.287 Malloc3 00:28:53.545 Malloc4 00:28:53.545 Malloc5 00:28:53.545 Malloc6 00:28:53.545 Malloc7 00:28:53.545 Malloc8 00:28:53.545 Malloc9 
00:28:53.803 Malloc10 00:28:53.803 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1093758 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1093758 /var/tmp/bdevperf.sock 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1093758 ']' 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
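The `for i in "${num_subsystems[@]}"` / `cat` loop above appends one RPC snippet per subsystem to `rpcs.txt`, which a single `rpc_cmd` invocation then replays, producing the Malloc1 through Malloc10 bdevs and the TCP listener seen in the log. The snippet bodies are not shown in the log, so the four RPCs below are an assumption based on a typical SPDK nvmf subsystem setup (the sizes and serial numbers are illustrative):

```shell
# Sketch of the rpcs.txt generation loop: four assumed RPCs per
# subsystem (malloc bdev, subsystem, namespace, listener). This only
# builds the text file; nothing is sent to the target here.
NVMF_PORT=4420
NVMF_FIRST_TARGET_IP=10.0.0.2
num_subsystems=({1..10})
rpcs=$(mktemp)
for i in "${num_subsystems[@]}"; do
cat <<EOF >> "$rpcs"
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a $NVMF_FIRST_TARGET_IP -s $NVMF_PORT
EOF
done
wc -l < "$rpcs"   # 40: four RPC lines for each of the ten subsystems
```

Batching all forty calls through one `rpc_cmd` avoids paying the RPC client startup cost once per subsystem.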
00:28:53.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.804 { 00:28:53.804 "params": { 00:28:53.804 "name": "Nvme$subsystem", 00:28:53.804 "trtype": "$TEST_TRANSPORT", 00:28:53.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.804 "adrfam": "ipv4", 00:28:53.804 "trsvcid": "$NVMF_PORT", 00:28:53.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.804 "hdgst": ${hdgst:-false}, 00:28:53.804 "ddgst": ${ddgst:-false} 00:28:53.804 }, 00:28:53.804 "method": "bdev_nvme_attach_controller" 00:28:53.804 } 00:28:53.804 EOF 00:28:53.804 )") 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.804 { 00:28:53.804 "params": { 00:28:53.804 "name": "Nvme$subsystem", 00:28:53.804 "trtype": "$TEST_TRANSPORT", 00:28:53.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.804 
"adrfam": "ipv4", 00:28:53.804 "trsvcid": "$NVMF_PORT", 00:28:53.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.804 "hdgst": ${hdgst:-false}, 00:28:53.804 "ddgst": ${ddgst:-false} 00:28:53.804 }, 00:28:53.804 "method": "bdev_nvme_attach_controller" 00:28:53.804 } 00:28:53.804 EOF 00:28:53.804 )") 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.804 { 00:28:53.804 "params": { 00:28:53.804 "name": "Nvme$subsystem", 00:28:53.804 "trtype": "$TEST_TRANSPORT", 00:28:53.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.804 "adrfam": "ipv4", 00:28:53.804 "trsvcid": "$NVMF_PORT", 00:28:53.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.804 "hdgst": ${hdgst:-false}, 00:28:53.804 "ddgst": ${ddgst:-false} 00:28:53.804 }, 00:28:53.804 "method": "bdev_nvme_attach_controller" 00:28:53.804 } 00:28:53.804 EOF 00:28:53.804 )") 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.804 { 00:28:53.804 "params": { 00:28:53.804 "name": "Nvme$subsystem", 00:28:53.804 "trtype": "$TEST_TRANSPORT", 00:28:53.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.804 "adrfam": "ipv4", 00:28:53.804 "trsvcid": "$NVMF_PORT", 00:28:53.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:28:53.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.804 "hdgst": ${hdgst:-false}, 00:28:53.804 "ddgst": ${ddgst:-false} 00:28:53.804 }, 00:28:53.804 "method": "bdev_nvme_attach_controller" 00:28:53.804 } 00:28:53.804 EOF 00:28:53.804 )") 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.804 { 00:28:53.804 "params": { 00:28:53.804 "name": "Nvme$subsystem", 00:28:53.804 "trtype": "$TEST_TRANSPORT", 00:28:53.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.804 "adrfam": "ipv4", 00:28:53.804 "trsvcid": "$NVMF_PORT", 00:28:53.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.804 "hdgst": ${hdgst:-false}, 00:28:53.804 "ddgst": ${ddgst:-false} 00:28:53.804 }, 00:28:53.804 "method": "bdev_nvme_attach_controller" 00:28:53.804 } 00:28:53.804 EOF 00:28:53.804 )") 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.804 { 00:28:53.804 "params": { 00:28:53.804 "name": "Nvme$subsystem", 00:28:53.804 "trtype": "$TEST_TRANSPORT", 00:28:53.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.804 "adrfam": "ipv4", 00:28:53.804 "trsvcid": "$NVMF_PORT", 00:28:53.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.804 "hdgst": ${hdgst:-false}, 00:28:53.804 "ddgst": 
${ddgst:-false} 00:28:53.804 }, 00:28:53.804 "method": "bdev_nvme_attach_controller" 00:28:53.804 } 00:28:53.804 EOF 00:28:53.804 )") 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.804 { 00:28:53.804 "params": { 00:28:53.804 "name": "Nvme$subsystem", 00:28:53.804 "trtype": "$TEST_TRANSPORT", 00:28:53.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.804 "adrfam": "ipv4", 00:28:53.804 "trsvcid": "$NVMF_PORT", 00:28:53.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.804 "hdgst": ${hdgst:-false}, 00:28:53.804 "ddgst": ${ddgst:-false} 00:28:53.804 }, 00:28:53.804 "method": "bdev_nvme_attach_controller" 00:28:53.804 } 00:28:53.804 EOF 00:28:53.804 )") 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:53.804 [2024-12-15 06:19:13.800428] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:28:53.804 [2024-12-15 06:19:13.800476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1093758 ] 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.804 { 00:28:53.804 "params": { 00:28:53.804 "name": "Nvme$subsystem", 00:28:53.804 "trtype": "$TEST_TRANSPORT", 00:28:53.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.804 "adrfam": "ipv4", 00:28:53.804 "trsvcid": "$NVMF_PORT", 00:28:53.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.804 "hdgst": ${hdgst:-false}, 00:28:53.804 "ddgst": ${ddgst:-false} 00:28:53.804 }, 00:28:53.804 "method": "bdev_nvme_attach_controller" 00:28:53.804 } 00:28:53.804 EOF 00:28:53.804 )") 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.804 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.804 { 00:28:53.804 "params": { 00:28:53.804 "name": "Nvme$subsystem", 00:28:53.804 "trtype": "$TEST_TRANSPORT", 00:28:53.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.804 "adrfam": "ipv4", 00:28:53.804 "trsvcid": "$NVMF_PORT", 00:28:53.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.805 "hdgst": ${hdgst:-false}, 00:28:53.805 "ddgst": ${ddgst:-false} 00:28:53.805 }, 00:28:53.805 "method": 
"bdev_nvme_attach_controller" 00:28:53.805 } 00:28:53.805 EOF 00:28:53.805 )") 00:28:53.805 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:53.805 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.805 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.805 { 00:28:53.805 "params": { 00:28:53.805 "name": "Nvme$subsystem", 00:28:53.805 "trtype": "$TEST_TRANSPORT", 00:28:53.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.805 "adrfam": "ipv4", 00:28:53.805 "trsvcid": "$NVMF_PORT", 00:28:53.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.805 "hdgst": ${hdgst:-false}, 00:28:53.805 "ddgst": ${ddgst:-false} 00:28:53.805 }, 00:28:53.805 "method": "bdev_nvme_attach_controller" 00:28:53.805 } 00:28:53.805 EOF 00:28:53.805 )") 00:28:53.805 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:53.805 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:28:53.805 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:28:53.805 06:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:53.805 "params": { 00:28:53.805 "name": "Nvme1", 00:28:53.805 "trtype": "tcp", 00:28:53.805 "traddr": "10.0.0.2", 00:28:53.805 "adrfam": "ipv4", 00:28:53.805 "trsvcid": "4420", 00:28:53.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:53.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:53.805 "hdgst": false, 00:28:53.805 "ddgst": false 00:28:53.805 }, 00:28:53.805 "method": "bdev_nvme_attach_controller" 00:28:53.805 },{ 00:28:53.805 "params": { 00:28:53.805 "name": "Nvme2", 00:28:53.805 "trtype": "tcp", 00:28:53.805 "traddr": "10.0.0.2", 00:28:53.805 "adrfam": "ipv4", 00:28:53.805 "trsvcid": "4420", 00:28:53.805 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:53.805 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:53.805 "hdgst": false, 00:28:53.805 "ddgst": false 00:28:53.805 }, 00:28:53.805 "method": "bdev_nvme_attach_controller" 00:28:53.805 },{ 00:28:53.805 "params": { 00:28:53.805 "name": "Nvme3", 00:28:53.805 "trtype": "tcp", 00:28:53.805 "traddr": "10.0.0.2", 00:28:53.805 "adrfam": "ipv4", 00:28:53.805 "trsvcid": "4420", 00:28:53.805 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:53.805 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:53.805 "hdgst": false, 00:28:53.805 "ddgst": false 00:28:53.805 }, 00:28:53.805 "method": "bdev_nvme_attach_controller" 00:28:53.805 },{ 00:28:53.805 "params": { 00:28:53.805 "name": "Nvme4", 00:28:53.805 "trtype": "tcp", 00:28:53.805 "traddr": "10.0.0.2", 00:28:53.805 "adrfam": "ipv4", 00:28:53.805 "trsvcid": "4420", 00:28:53.805 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:53.805 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:53.805 "hdgst": false, 00:28:53.805 "ddgst": false 00:28:53.805 }, 00:28:53.805 "method": "bdev_nvme_attach_controller" 00:28:53.805 },{ 00:28:53.805 "params": { 
00:28:53.805 "name": "Nvme5", 00:28:53.805 "trtype": "tcp", 00:28:53.805 "traddr": "10.0.0.2", 00:28:53.805 "adrfam": "ipv4", 00:28:53.805 "trsvcid": "4420", 00:28:53.805 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:53.805 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:53.805 "hdgst": false, 00:28:53.805 "ddgst": false 00:28:53.805 }, 00:28:53.805 "method": "bdev_nvme_attach_controller" 00:28:53.805 },{ 00:28:53.805 "params": { 00:28:53.805 "name": "Nvme6", 00:28:53.805 "trtype": "tcp", 00:28:53.805 "traddr": "10.0.0.2", 00:28:53.805 "adrfam": "ipv4", 00:28:53.805 "trsvcid": "4420", 00:28:53.805 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:53.805 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:53.805 "hdgst": false, 00:28:53.805 "ddgst": false 00:28:53.805 }, 00:28:53.805 "method": "bdev_nvme_attach_controller" 00:28:53.805 },{ 00:28:53.805 "params": { 00:28:53.805 "name": "Nvme7", 00:28:53.805 "trtype": "tcp", 00:28:53.805 "traddr": "10.0.0.2", 00:28:53.805 "adrfam": "ipv4", 00:28:53.805 "trsvcid": "4420", 00:28:53.805 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:53.805 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:53.805 "hdgst": false, 00:28:53.805 "ddgst": false 00:28:53.805 }, 00:28:53.805 "method": "bdev_nvme_attach_controller" 00:28:53.805 },{ 00:28:53.805 "params": { 00:28:53.805 "name": "Nvme8", 00:28:53.805 "trtype": "tcp", 00:28:53.805 "traddr": "10.0.0.2", 00:28:53.805 "adrfam": "ipv4", 00:28:53.805 "trsvcid": "4420", 00:28:53.805 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:53.805 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:53.805 "hdgst": false, 00:28:53.805 "ddgst": false 00:28:53.805 }, 00:28:53.805 "method": "bdev_nvme_attach_controller" 00:28:53.805 },{ 00:28:53.805 "params": { 00:28:53.805 "name": "Nvme9", 00:28:53.805 "trtype": "tcp", 00:28:53.805 "traddr": "10.0.0.2", 00:28:53.805 "adrfam": "ipv4", 00:28:53.805 "trsvcid": "4420", 00:28:53.805 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:53.805 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:28:53.805 "hdgst": false, 00:28:53.805 "ddgst": false 00:28:53.805 }, 00:28:53.805 "method": "bdev_nvme_attach_controller" 00:28:53.805 },{ 00:28:53.805 "params": { 00:28:53.805 "name": "Nvme10", 00:28:53.805 "trtype": "tcp", 00:28:53.805 "traddr": "10.0.0.2", 00:28:53.805 "adrfam": "ipv4", 00:28:53.805 "trsvcid": "4420", 00:28:53.805 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:53.805 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:53.805 "hdgst": false, 00:28:53.805 "ddgst": false 00:28:53.805 }, 00:28:53.805 "method": "bdev_nvme_attach_controller" 00:28:53.805 }' 00:28:53.805 [2024-12-15 06:19:13.874249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.805 [2024-12-15 06:19:13.896654] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:55.697 Running I/O for 10 seconds... 00:28:55.697 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:55.697 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:55.697 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:55.697 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.697 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:55.953 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.953 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:55.953 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:55.953 06:19:15 
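The JSON handed to bdevperf via `--json /dev/fd/63` above is assembled by `gen_nvmf_target_json`: a heredoc per subsystem is pushed into the `config` array, and the entries are then comma-joined with `IFS=,` and printed, as the expanded `printf '%s\n' '{ ... },{ ... }'` output shows. A condensed sketch of that pattern, using three subsystems and validating the joined result with `jq` (the real helper embeds this fragment in a larger bdevperf config document):

```shell
# Sketch of the gen_nvmf_target_json pattern: one params object per
# subsystem collected into an array, then comma-joined via IFS.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
config=()
for subsystem in 1 2 3; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# "${config[*]}" joins elements with the first character of IFS (',').
json="[$(IFS=,; printf '%s' "${config[*]}")]"
echo "$json" | jq -r '.[1].params.name'   # Nvme2
```

The `${hdgst:-false}` / `${ddgst:-false}` expansions are why the final JSON shows `"hdgst": false` for every controller: neither variable is set in this run.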
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:55.953 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:55.953 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:55.953 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:55.953 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:55.953 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:55.953 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.953 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:55.953 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:55.953 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.953 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:28:55.953 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:28:55.953 06:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:56.210 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:56.210 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:56.210 06:19:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:56.210 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:56.211 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.211 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:56.211 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.211 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:56.211 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:56.211 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:56.469 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:56.469 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:56.469 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:56.469 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:56.469 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.469 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:56.469 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:28:56.469 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:28:56.469 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:28:56.469 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:28:56.469 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:28:56.469 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:28:56.469 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1093758 00:28:56.469 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1093758 ']' 00:28:56.469 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1093758 00:28:56.469 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:56.469 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:56.469 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1093758 00:28:56.727 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:56.727 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:56.727 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1093758' 00:28:56.727 killing process with pid 1093758 00:28:56.727 06:19:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1093758 00:28:56.727 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1093758 00:28:56.727 Received shutdown signal, test time was about 0.914925 seconds 00:28:56.727 00:28:56.727 Latency(us) 00:28:56.727 [2024-12-15T05:19:16.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.727 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.727 Verification LBA range: start 0x0 length 0x400 00:28:56.727 Nvme1n1 : 0.90 283.74 17.73 0.00 0.00 223211.76 16227.96 208716.56 00:28:56.727 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.727 Verification LBA range: start 0x0 length 0x400 00:28:56.727 Nvme2n1 : 0.90 285.01 17.81 0.00 0.00 218350.20 15666.22 213709.78 00:28:56.727 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.727 Verification LBA range: start 0x0 length 0x400 00:28:56.727 Nvme3n1 : 0.89 288.57 18.04 0.00 0.00 211682.74 24466.77 190740.97 00:28:56.727 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.727 Verification LBA range: start 0x0 length 0x400 00:28:56.727 Nvme4n1 : 0.89 287.54 17.97 0.00 0.00 208497.49 15541.39 210713.84 00:28:56.727 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.727 Verification LBA range: start 0x0 length 0x400 00:28:56.727 Nvme5n1 : 0.91 280.00 17.50 0.00 0.00 210218.67 16976.94 228689.43 00:28:56.727 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.727 Verification LBA range: start 0x0 length 0x400 00:28:56.727 Nvme6n1 : 0.91 282.03 17.63 0.00 0.00 204586.18 27337.87 206719.27 00:28:56.727 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.727 Verification LBA range: start 0x0 length 0x400 00:28:56.727 Nvme7n1 : 
0.91 282.27 17.64 0.00 0.00 201008.03 13731.35 213709.78 00:28:56.727 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.727 Verification LBA range: start 0x0 length 0x400 00:28:56.727 Nvme8n1 : 0.88 295.09 18.44 0.00 0.00 186404.67 4649.94 213709.78 00:28:56.727 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.727 Verification LBA range: start 0x0 length 0x400 00:28:56.727 Nvme9n1 : 0.91 280.82 17.55 0.00 0.00 194394.70 16227.96 217704.35 00:28:56.727 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.727 Verification LBA range: start 0x0 length 0x400 00:28:56.727 Nvme10n1 : 0.88 218.65 13.67 0.00 0.00 242995.20 17850.76 242670.45 00:28:56.727 [2024-12-15T05:19:16.867Z] =================================================================================================================== 00:28:56.727 [2024-12-15T05:19:16.867Z] Total : 2783.72 173.98 0.00 0.00 209255.77 4649.94 242670.45 00:28:56.984 06:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:28:57.917 06:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1093637 00:28:57.917 06:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:28:57.917 06:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:57.917 06:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:57.917 06:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:57.917 06:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@46 -- # nvmftestfini 00:28:57.917 06:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:57.917 06:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:28:57.917 06:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:57.917 06:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:28:57.917 06:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:57.917 06:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:57.917 rmmod nvme_tcp 00:28:57.917 rmmod nvme_fabrics 00:28:57.917 rmmod nvme_keyring 00:28:57.917 06:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:57.917 06:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:28:57.917 06:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:28:57.917 06:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1093637 ']' 00:28:57.917 06:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1093637 00:28:57.917 06:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1093637 ']' 00:28:57.917 06:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1093637 00:28:57.917 06:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:57.917 06:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:28:57.917 06:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1093637 00:28:57.917 06:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:57.917 06:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:57.917 06:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1093637' 00:28:57.917 killing process with pid 1093637 00:28:57.917 06:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1093637 00:28:57.917 06:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1093637 00:28:58.484 06:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:58.484 06:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:58.484 06:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:58.484 06:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:28:58.484 06:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:58.484 06:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:28:58.484 06:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:28:58.484 06:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:58.484 06:19:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:58.484 06:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.484 06:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.484 06:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.388 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:00.388 00:29:00.388 real 0m7.825s 00:29:00.388 user 0m24.165s 00:29:00.388 sys 0m1.367s 00:29:00.388 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:00.388 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:00.388 ************************************ 00:29:00.388 END TEST nvmf_shutdown_tc2 00:29:00.388 ************************************ 00:29:00.388 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:00.388 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:00.388 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:00.388 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:00.648 ************************************ 00:29:00.648 START TEST nvmf_shutdown_tc3 00:29:00.648 ************************************ 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:29:00.648 06:19:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:00.648 06:19:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:00.648 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:00.648 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:00.648 Found net devices under 0000:af:00.0: cvl_0_0 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.648 06:19:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:00.648 Found net devices under 0000:af:00.1: cvl_0_1 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:00.648 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:00.649 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:00.649 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:00.649 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:00.649 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:00.649 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:00.649 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:00.649 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:00.649 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:00.649 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:00.649 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:00.649 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:00.649 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:29:00.649 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:00.649 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:00.649 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:00.649 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:00.649 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:00.649 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:00.649 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:00.908 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:00.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:00.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:29:00.908 00:29:00.908 --- 10.0.0.2 ping statistics --- 00:29:00.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.908 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:29:00.908 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:00.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:00.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:29:00.908 00:29:00.908 --- 10.0.0.1 ping statistics --- 00:29:00.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.908 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:29:00.908 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:00.908 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:29:00.908 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:00.908 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:00.908 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:00.908 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:00.908 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:00.908 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:00.908 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:00.908 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:00.908 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:00.908 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:00.908 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:00.908 
06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1095002 00:29:00.908 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1095002 00:29:00.908 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:00.908 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1095002 ']' 00:29:00.908 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.908 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:00.908 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:00.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:00.908 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:00.908 06:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:00.908 [2024-12-15 06:19:20.904655] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:29:00.908 [2024-12-15 06:19:20.904700] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:00.908 [2024-12-15 06:19:20.985599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:00.908 [2024-12-15 06:19:21.007913] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:00.908 [2024-12-15 06:19:21.007949] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:00.908 [2024-12-15 06:19:21.007956] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:00.908 [2024-12-15 06:19:21.007963] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:00.908 [2024-12-15 06:19:21.007968] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:00.908 [2024-12-15 06:19:21.009423] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:00.908 [2024-12-15 06:19:21.009532] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:00.908 [2024-12-15 06:19:21.009638] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.908 [2024-12-15 06:19:21.009640] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:01.167 [2024-12-15 06:19:21.140579] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.167 06:19:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.167 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:01.167 Malloc1 00:29:01.167 [2024-12-15 06:19:21.248418] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:01.167 Malloc2 00:29:01.426 Malloc3 00:29:01.426 Malloc4 00:29:01.426 Malloc5 00:29:01.426 Malloc6 00:29:01.426 Malloc7 00:29:01.426 Malloc8 00:29:01.685 Malloc9 
00:29:01.685 Malloc10 00:29:01.685 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.685 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:01.685 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:01.685 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:01.685 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1095267 00:29:01.685 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1095267 /var/tmp/bdevperf.sock 00:29:01.685 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1095267 ']' 00:29:01.685 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:01.685 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:01.685 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:01.685 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:01.685 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:29:01.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:01.685 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:29:01.685 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:01.685 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:29:01.685 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:01.685 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.685 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.685 { 00:29:01.685 "params": { 00:29:01.685 "name": "Nvme$subsystem", 00:29:01.685 "trtype": "$TEST_TRANSPORT", 00:29:01.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.685 "adrfam": "ipv4", 00:29:01.685 "trsvcid": "$NVMF_PORT", 00:29:01.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.685 "hdgst": ${hdgst:-false}, 00:29:01.685 "ddgst": ${ddgst:-false} 00:29:01.685 }, 00:29:01.685 "method": "bdev_nvme_attach_controller" 00:29:01.685 } 00:29:01.685 EOF 00:29:01.685 )") 00:29:01.685 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:01.685 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.685 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.685 { 00:29:01.685 "params": { 00:29:01.685 "name": "Nvme$subsystem", 00:29:01.685 "trtype": "$TEST_TRANSPORT", 00:29:01.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.685 
"adrfam": "ipv4", 00:29:01.685 "trsvcid": "$NVMF_PORT", 00:29:01.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.685 "hdgst": ${hdgst:-false}, 00:29:01.685 "ddgst": ${ddgst:-false} 00:29:01.685 }, 00:29:01.686 "method": "bdev_nvme_attach_controller" 00:29:01.686 } 00:29:01.686 EOF 00:29:01.686 )") 00:29:01.686 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:01.686 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.686 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.686 { 00:29:01.686 "params": { 00:29:01.686 "name": "Nvme$subsystem", 00:29:01.686 "trtype": "$TEST_TRANSPORT", 00:29:01.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.686 "adrfam": "ipv4", 00:29:01.686 "trsvcid": "$NVMF_PORT", 00:29:01.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.686 "hdgst": ${hdgst:-false}, 00:29:01.686 "ddgst": ${ddgst:-false} 00:29:01.686 }, 00:29:01.686 "method": "bdev_nvme_attach_controller" 00:29:01.686 } 00:29:01.686 EOF 00:29:01.686 )") 00:29:01.686 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:01.686 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.686 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.686 { 00:29:01.686 "params": { 00:29:01.686 "name": "Nvme$subsystem", 00:29:01.686 "trtype": "$TEST_TRANSPORT", 00:29:01.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.686 "adrfam": "ipv4", 00:29:01.686 "trsvcid": "$NVMF_PORT", 00:29:01.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:29:01.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.686 "hdgst": ${hdgst:-false}, 00:29:01.686 "ddgst": ${ddgst:-false} 00:29:01.686 }, 00:29:01.686 "method": "bdev_nvme_attach_controller" 00:29:01.686 } 00:29:01.686 EOF 00:29:01.686 )") 00:29:01.686 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:01.686 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.686 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.686 { 00:29:01.686 "params": { 00:29:01.686 "name": "Nvme$subsystem", 00:29:01.686 "trtype": "$TEST_TRANSPORT", 00:29:01.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.686 "adrfam": "ipv4", 00:29:01.686 "trsvcid": "$NVMF_PORT", 00:29:01.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.686 "hdgst": ${hdgst:-false}, 00:29:01.686 "ddgst": ${ddgst:-false} 00:29:01.686 }, 00:29:01.686 "method": "bdev_nvme_attach_controller" 00:29:01.686 } 00:29:01.686 EOF 00:29:01.686 )") 00:29:01.686 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:01.686 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.686 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.686 { 00:29:01.686 "params": { 00:29:01.686 "name": "Nvme$subsystem", 00:29:01.686 "trtype": "$TEST_TRANSPORT", 00:29:01.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.686 "adrfam": "ipv4", 00:29:01.686 "trsvcid": "$NVMF_PORT", 00:29:01.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.686 "hdgst": ${hdgst:-false}, 00:29:01.686 "ddgst": 
${ddgst:-false} 00:29:01.686 }, 00:29:01.686 "method": "bdev_nvme_attach_controller" 00:29:01.686 } 00:29:01.686 EOF 00:29:01.686 )") 00:29:01.686 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:01.686 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.686 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.686 { 00:29:01.686 "params": { 00:29:01.686 "name": "Nvme$subsystem", 00:29:01.686 "trtype": "$TEST_TRANSPORT", 00:29:01.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.686 "adrfam": "ipv4", 00:29:01.686 "trsvcid": "$NVMF_PORT", 00:29:01.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.686 "hdgst": ${hdgst:-false}, 00:29:01.686 "ddgst": ${ddgst:-false} 00:29:01.686 }, 00:29:01.686 "method": "bdev_nvme_attach_controller" 00:29:01.686 } 00:29:01.686 EOF 00:29:01.686 )") 00:29:01.686 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:01.686 [2024-12-15 06:19:21.718474] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:29:01.686 [2024-12-15 06:19:21.718522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1095267 ] 00:29:01.686 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.686 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.686 { 00:29:01.686 "params": { 00:29:01.686 "name": "Nvme$subsystem", 00:29:01.686 "trtype": "$TEST_TRANSPORT", 00:29:01.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.686 "adrfam": "ipv4", 00:29:01.686 "trsvcid": "$NVMF_PORT", 00:29:01.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.686 "hdgst": ${hdgst:-false}, 00:29:01.686 "ddgst": ${ddgst:-false} 00:29:01.686 }, 00:29:01.686 "method": "bdev_nvme_attach_controller" 00:29:01.686 } 00:29:01.686 EOF 00:29:01.686 )") 00:29:01.686 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:01.686 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.686 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.686 { 00:29:01.686 "params": { 00:29:01.686 "name": "Nvme$subsystem", 00:29:01.686 "trtype": "$TEST_TRANSPORT", 00:29:01.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.686 "adrfam": "ipv4", 00:29:01.686 "trsvcid": "$NVMF_PORT", 00:29:01.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.686 "hdgst": ${hdgst:-false}, 00:29:01.686 "ddgst": ${ddgst:-false} 00:29:01.686 }, 00:29:01.686 "method": 
"bdev_nvme_attach_controller" 00:29:01.686 } 00:29:01.686 EOF 00:29:01.686 )") 00:29:01.686 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:01.686 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.686 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.686 { 00:29:01.686 "params": { 00:29:01.686 "name": "Nvme$subsystem", 00:29:01.686 "trtype": "$TEST_TRANSPORT", 00:29:01.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.686 "adrfam": "ipv4", 00:29:01.686 "trsvcid": "$NVMF_PORT", 00:29:01.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.686 "hdgst": ${hdgst:-false}, 00:29:01.686 "ddgst": ${ddgst:-false} 00:29:01.686 }, 00:29:01.686 "method": "bdev_nvme_attach_controller" 00:29:01.686 } 00:29:01.686 EOF 00:29:01.686 )") 00:29:01.686 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:01.686 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:29:01.686 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:29:01.686 06:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:01.686 "params": { 00:29:01.686 "name": "Nvme1", 00:29:01.686 "trtype": "tcp", 00:29:01.686 "traddr": "10.0.0.2", 00:29:01.686 "adrfam": "ipv4", 00:29:01.686 "trsvcid": "4420", 00:29:01.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:01.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:01.686 "hdgst": false, 00:29:01.686 "ddgst": false 00:29:01.686 }, 00:29:01.686 "method": "bdev_nvme_attach_controller" 00:29:01.686 },{ 00:29:01.686 "params": { 00:29:01.686 "name": "Nvme2", 00:29:01.686 "trtype": "tcp", 00:29:01.686 "traddr": "10.0.0.2", 00:29:01.686 "adrfam": "ipv4", 00:29:01.686 "trsvcid": "4420", 00:29:01.686 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:01.686 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:01.686 "hdgst": false, 00:29:01.686 "ddgst": false 00:29:01.686 }, 00:29:01.686 "method": "bdev_nvme_attach_controller" 00:29:01.686 },{ 00:29:01.686 "params": { 00:29:01.686 "name": "Nvme3", 00:29:01.686 "trtype": "tcp", 00:29:01.686 "traddr": "10.0.0.2", 00:29:01.686 "adrfam": "ipv4", 00:29:01.686 "trsvcid": "4420", 00:29:01.686 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:01.686 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:01.686 "hdgst": false, 00:29:01.686 "ddgst": false 00:29:01.686 }, 00:29:01.686 "method": "bdev_nvme_attach_controller" 00:29:01.686 },{ 00:29:01.686 "params": { 00:29:01.686 "name": "Nvme4", 00:29:01.686 "trtype": "tcp", 00:29:01.686 "traddr": "10.0.0.2", 00:29:01.686 "adrfam": "ipv4", 00:29:01.686 "trsvcid": "4420", 00:29:01.686 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:01.686 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:01.686 "hdgst": false, 00:29:01.686 "ddgst": false 00:29:01.686 }, 00:29:01.686 "method": "bdev_nvme_attach_controller" 00:29:01.686 },{ 00:29:01.686 "params": { 
00:29:01.686 "name": "Nvme5", 00:29:01.686 "trtype": "tcp", 00:29:01.686 "traddr": "10.0.0.2", 00:29:01.686 "adrfam": "ipv4", 00:29:01.686 "trsvcid": "4420", 00:29:01.686 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:01.686 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:01.687 "hdgst": false, 00:29:01.687 "ddgst": false 00:29:01.687 }, 00:29:01.687 "method": "bdev_nvme_attach_controller" 00:29:01.687 },{ 00:29:01.687 "params": { 00:29:01.687 "name": "Nvme6", 00:29:01.687 "trtype": "tcp", 00:29:01.687 "traddr": "10.0.0.2", 00:29:01.687 "adrfam": "ipv4", 00:29:01.687 "trsvcid": "4420", 00:29:01.687 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:01.687 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:01.687 "hdgst": false, 00:29:01.687 "ddgst": false 00:29:01.687 }, 00:29:01.687 "method": "bdev_nvme_attach_controller" 00:29:01.687 },{ 00:29:01.687 "params": { 00:29:01.687 "name": "Nvme7", 00:29:01.687 "trtype": "tcp", 00:29:01.687 "traddr": "10.0.0.2", 00:29:01.687 "adrfam": "ipv4", 00:29:01.687 "trsvcid": "4420", 00:29:01.687 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:01.687 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:01.687 "hdgst": false, 00:29:01.687 "ddgst": false 00:29:01.687 }, 00:29:01.687 "method": "bdev_nvme_attach_controller" 00:29:01.687 },{ 00:29:01.687 "params": { 00:29:01.687 "name": "Nvme8", 00:29:01.687 "trtype": "tcp", 00:29:01.687 "traddr": "10.0.0.2", 00:29:01.687 "adrfam": "ipv4", 00:29:01.687 "trsvcid": "4420", 00:29:01.687 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:01.687 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:01.687 "hdgst": false, 00:29:01.687 "ddgst": false 00:29:01.687 }, 00:29:01.687 "method": "bdev_nvme_attach_controller" 00:29:01.687 },{ 00:29:01.687 "params": { 00:29:01.687 "name": "Nvme9", 00:29:01.687 "trtype": "tcp", 00:29:01.687 "traddr": "10.0.0.2", 00:29:01.687 "adrfam": "ipv4", 00:29:01.687 "trsvcid": "4420", 00:29:01.687 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:01.687 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:29:01.687 "hdgst": false, 00:29:01.687 "ddgst": false 00:29:01.687 }, 00:29:01.687 "method": "bdev_nvme_attach_controller" 00:29:01.687 },{ 00:29:01.687 "params": { 00:29:01.687 "name": "Nvme10", 00:29:01.687 "trtype": "tcp", 00:29:01.687 "traddr": "10.0.0.2", 00:29:01.687 "adrfam": "ipv4", 00:29:01.687 "trsvcid": "4420", 00:29:01.687 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:01.687 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:01.687 "hdgst": false, 00:29:01.687 "ddgst": false 00:29:01.687 }, 00:29:01.687 "method": "bdev_nvme_attach_controller" 00:29:01.687 }' 00:29:01.687 [2024-12-15 06:19:21.794581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.687 [2024-12-15 06:19:21.816873] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.060 Running I/O for 10 seconds... 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 
00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1095002 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1095002 ']' 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1095002 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1095002 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1095002' 00:29:03.636 killing process with pid 1095002 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1095002 00:29:03.636 06:19:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1095002 00:29:03.636 [2024-12-15 06:19:23.722445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d52f00 is same with the state(6) to be set 00:29:03.636 [2024-12-15 06:19:23.722490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d52f00 is same 
with the state(6) to be set 00:29:03.636 [2024-12-15 06:19:23.722498] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d52f00 is same with the state(6) to be set 00:29:03.637 [... identical tcp.c:1790 message repeated for tqpair=0x1d52f00 through 06:19:23.722873 ...]
00:29:03.637 [2024-12-15 06:19:23.724038] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc75a0 is same with the state(6) to be set 00:29:03.638 [... identical message repeated for tqpair=0x1fc75a0 through 06:19:23.724472 ...]
00:29:03.638 [2024-12-15 06:19:23.726454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d538c0 is same with the state(6) to be set 00:29:03.639 [... identical message repeated for tqpair=0x1d538c0 through 06:19:23.726878 ...]
00:29:03.639 [2024-12-15 06:19:23.727795] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d53db0 is same with the state(6) to be set 00:29:03.639 [... identical message repeated for tqpair=0x1d53db0 through 06:19:23.728220 ...]
00:29:03.639 [2024-12-15 06:19:23.728888] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.639 [... identical message repeated for tqpair=0x1d54280 ...] 00:29:03.639 [2024-12-15 06:19:23.728926] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.639 [2024-12-15 06:19:23.728933] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.639 [2024-12-15 06:19:23.728940] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.639 [2024-12-15 06:19:23.728946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.639 [2024-12-15 06:19:23.728952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.639 [2024-12-15 06:19:23.728958] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.639 [2024-12-15 06:19:23.728964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.728970] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.728976] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.728982] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.728989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.728999] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729006] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729012] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729017] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729024] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729032] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729039] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729046] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729059] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729065] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729072] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729084] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729090] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729099] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729105] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729126] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729132] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729139] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729152] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729158] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729165] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729171] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729176] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729182] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729188] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729195] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729201] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729219] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729225] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729231] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729236] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729242] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729269] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729276] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729288] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729294] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.729306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54280 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.730895] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.730909] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.730916] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.730922] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.730928] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.730934] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.730940] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.730947] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.730953] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.730960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.730966] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.730973] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.730979] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.730986] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.730997] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.731004] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.731011] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.731018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.731023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.731030] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.731036] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.731045] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.731051] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.731057] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.640 [2024-12-15 06:19:23.731064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731070] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731083] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731089] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731096] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731102] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731108] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731115] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731122] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731128] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731135] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731140] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731148] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731155] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731161] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731167] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731174] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731181] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731187] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731200] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731206] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731212] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731219] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731226] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731233] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731240] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731246] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731253] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731259] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731271] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731291] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731297] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731303] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.731310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54ad0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732104] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732118] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732124] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732130] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732137] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732143] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732149] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732162] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732168] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732174] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732185] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732194] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732201] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732206] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732212] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732218] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732224] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732230] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732236] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732242] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732249] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732255] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732269] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732280] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732286] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732293] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732314] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732320] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641 [2024-12-15 06:19:23.732326] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54fc0 is same with the state(6) to be set 00:29:03.641
[2024-12-15 06:19:23.733070] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d55490 is same with the state(6) to be set 00:29:03.642 [2024-12-15 06:19:23.733208]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d55490 is same with the state(6) to be set 00:29:03.642
[2024-12-15 06:19:23.733227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.642
[2024-12-15 06:19:23.733257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.642
[2024-12-15 06:19:23.733268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.642
[2024-12-15 06:19:23.733279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.642
[2024-12-15 06:19:23.733288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.642
[2024-12-15 06:19:23.733296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.642
[2024-12-15 06:19:23.733305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.642
[2024-12-15 06:19:23.733315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.642
[2024-12-15 06:19:23.733324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a3610 is same with the state(6) to be set 00:29:03.642
[2024-12-15 06:19:23.733364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.642
[2024-12-15 06:19:23.733376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:03.642 [2024-12-15 06:19:23.733385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.642 [2024-12-15 06:19:23.733393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.642 [2024-12-15 06:19:23.733401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.642 [2024-12-15 06:19:23.733408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.642 [2024-12-15 06:19:23.733415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.642 [2024-12-15 06:19:23.733421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.642 [2024-12-15 06:19:23.733431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f5ac0 is same with the state(6) to be set 00:29:03.642 [2024-12-15 06:19:23.733455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.643 [2024-12-15 06:19:23.733464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.733471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.643 [2024-12-15 06:19:23.733479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.733486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.643 [2024-12-15 06:19:23.733493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.733500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.643 [2024-12-15 06:19:23.733507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.733513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bbea0 is same with the state(6) to be set 00:29:03.643 [2024-12-15 06:19:23.733536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.643 [2024-12-15 06:19:23.733545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.733552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.643 [2024-12-15 06:19:23.733559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.733566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.643 [2024-12-15 06:19:23.733573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 
[2024-12-15 06:19:23.733581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.643 [2024-12-15 06:19:23.733588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.733594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c3400 is same with the state(6) to be set 00:29:03.643 [2024-12-15 06:19:23.733617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.643 [2024-12-15 06:19:23.733626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.733637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.643 [2024-12-15 06:19:23.733643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.733650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.643 [2024-12-15 06:19:23.733656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.733664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.643 [2024-12-15 06:19:23.733671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.733677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x118d630 is same with the state(6) to be set 00:29:03.643 [2024-12-15 06:19:23.733700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.643 [2024-12-15 06:19:23.733709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.733716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.643 [2024-12-15 06:19:23.733722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.733731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.643 [2024-12-15 06:19:23.733738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.733745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.643 [2024-12-15 06:19:23.733751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.733757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c2360 is same with the state(6) to be set 00:29:03.643 [2024-12-15 06:19:23.733780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.643 [2024-12-15 06:19:23.733788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 
06:19:23.733796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.643 [2024-12-15 06:19:23.733802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.733810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.643 [2024-12-15 06:19:23.733816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.733823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.643 [2024-12-15 06:19:23.733829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.733835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1198140 is same with the state(6) to be set 00:29:03.643 [2024-12-15 06:19:23.733862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.643 [2024-12-15 06:19:23.733871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.733880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.643 [2024-12-15 06:19:23.733887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.733894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.643 [2024-12-15 06:19:23.733900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.733906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.643 [2024-12-15 06:19:23.733913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.733919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1197cd0 is same with the state(6) to be set 00:29:03.643 [2024-12-15 06:19:23.733942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.643 [2024-12-15 06:19:23.733951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.733958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.643 [2024-12-15 06:19:23.733964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.733971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.643 [2024-12-15 06:19:23.733977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.733985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.643 [2024-12-15 
06:19:23.733998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.734005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1194ca0 is same with the state(6) to be set 00:29:03.643 [2024-12-15 06:19:23.734398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.643 [2024-12-15 06:19:23.734419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.734433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.643 [2024-12-15 06:19:23.734440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.734449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.643 [2024-12-15 06:19:23.734456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.734464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.643 [2024-12-15 06:19:23.734470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.734481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.643 [2024-12-15 06:19:23.734488] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.734496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.643 [2024-12-15 06:19:23.734502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.734510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.643 [2024-12-15 06:19:23.734517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.734525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.643 [2024-12-15 06:19:23.734532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.734541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.643 [2024-12-15 06:19:23.734548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.643 [2024-12-15 06:19:23.734556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.643 [2024-12-15 06:19:23.734563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.734571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.734577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.734586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.734592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.734600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.734606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.734614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.734621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.734629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.734635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.734643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.734652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.734660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.734668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.734676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.734682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.734689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.734696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.734704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.734710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.734718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.734725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.734733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 
06:19:23.734739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.734747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.734753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.734762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.734769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.734778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.734784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.734792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.734799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.734807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.734813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.734821] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.734828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.734836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.734842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.734852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.734858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.734866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.734872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.734880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.734888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.734895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.734902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.734910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.734916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.734924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.734931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.734940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.734946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.734954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.734960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.734969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.734976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.734984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 
06:19:23.734990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.735012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.735019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.735029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.735036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.735044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.735052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.735061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.735068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.735076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.735082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.735091] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.735097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.735106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.735112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.735120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.735128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.735136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.735144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.735151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.735158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.735167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.735174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.644 [2024-12-15 06:19:23.735181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.644 [2024-12-15 06:19:23.735188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 
[2024-12-15 06:19:23.735262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735346] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.645 [2024-12-15 06:19:23.735648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735695] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 
[2024-12-15 06:19:23.735872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735960] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.645 [2024-12-15 06:19:23.735981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.645 [2024-12-15 06:19:23.735989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.736004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.736013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.736019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.736029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.736036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.736045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.736051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.736059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.736066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.736074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.736080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.736090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.736096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.736105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.736112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.736120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.736127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.736136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.736143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.736151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.736158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.736166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.736173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.736181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.736187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.740639] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d55490 is same with the state(6) to be set 00:29:03.646 [2024-12-15 06:19:23.740648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d55490 is same with the state(6) to be set 00:29:03.646 [2024-12-15 06:19:23.740656] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d55490 is same with the state(6) to be set 00:29:03.646 [2024-12-15 06:19:23.740663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d55490 is same with the state(6) to be set 00:29:03.646 [2024-12-15 06:19:23.740670] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d55490 is same with the state(6) to be set 00:29:03.646 [2024-12-15 06:19:23.740676] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d55490 is same with the state(6) to be set 00:29:03.646 [2024-12-15 06:19:23.740681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d55490 is same with the state(6) to be set 00:29:03.646 [2024-12-15 06:19:23.740687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d55490 is same with the state(6) to be set 00:29:03.646 [2024-12-15 06:19:23.740693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d55490 is same with the state(6) to be set 00:29:03.646 [2024-12-15 06:19:23.740700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d55490 is same with the state(6) to be set 00:29:03.646 [2024-12-15 06:19:23.740706] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d55490 is same with the state(6) to be set 00:29:03.646 [2024-12-15 06:19:23.740712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d55490 is same with the state(6) to be set 00:29:03.646 [2024-12-15 06:19:23.740721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d55490 is same with the state(6) to be set 00:29:03.646 [2024-12-15 06:19:23.740727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d55490 is same with the state(6) to be set 00:29:03.646 [2024-12-15 06:19:23.740734] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d55490 is same with the state(6) to be set 00:29:03.646 [2024-12-15 06:19:23.740740] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d55490 is same with the state(6) to be set 00:29:03.646 [2024-12-15 06:19:23.740746] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d55490 is same with the state(6) to be set 00:29:03.646 [2024-12-15 06:19:23.740752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d55490 is same with the state(6) to be set 00:29:03.646 [2024-12-15 06:19:23.748381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.748393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.748403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.748410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.748420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.748427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.748435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.748442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.748450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.748457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.748467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.748474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.748482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.748488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.748497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.748504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.748513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.748520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.748528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.748536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.748545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.748554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.748562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.748569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.748578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.748584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.748593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.748600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.748609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.748616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.748624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.748631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.748639] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.748646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.748654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.748661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.748671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.646 [2024-12-15 06:19:23.748678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.646 [2024-12-15 06:19:23.748686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.647 [2024-12-15 06:19:23.748694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.647 [2024-12-15 06:19:23.748702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.647 [2024-12-15 06:19:23.748709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.647 [2024-12-15 06:19:23.748718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.647 [2024-12-15 06:19:23.748725] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.647 [2024-12-15 06:19:23.748735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.647 [2024-12-15 06:19:23.748742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.647 [2024-12-15 06:19:23.748750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.647 [2024-12-15 06:19:23.748757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.647 [2024-12-15 06:19:23.748765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.647 [2024-12-15 06:19:23.748772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.647 [2024-12-15 06:19:23.748781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.647 [2024-12-15 06:19:23.748787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.647 [2024-12-15 06:19:23.748796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.647 [2024-12-15 06:19:23.748803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.647 [2024-12-15 06:19:23.748812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.647 [2024-12-15 06:19:23.748819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.647 [2024-12-15 06:19:23.748828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.647 [2024-12-15 06:19:23.748834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.647 [2024-12-15 06:19:23.748843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.647 [2024-12-15 06:19:23.748849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.647 [2024-12-15 06:19:23.748857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204bb20 is same with the state(6) to be set 00:29:03.647 [2024-12-15 06:19:23.749155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a3610 (9): Bad file descriptor 00:29:03.647 [2024-12-15 06:19:23.749195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.647 [2024-12-15 06:19:23.749206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.647 [2024-12-15 06:19:23.749216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.647 [2024-12-15 06:19:23.749223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.647 [2024-12-15 06:19:23.749231] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.647 [2024-12-15 06:19:23.749238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.647 [2024-12-15 06:19:23.749246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.647 [2024-12-15 06:19:23.749255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.647 [2024-12-15 06:19:23.749264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16059b0 is same with the state(6) to be set 00:29:03.647 [2024-12-15 06:19:23.749280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f5ac0 (9): Bad file descriptor 00:29:03.647 [2024-12-15 06:19:23.749293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15bbea0 (9): Bad file descriptor 00:29:03.647 [2024-12-15 06:19:23.749305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c3400 (9): Bad file descriptor 00:29:03.647 [2024-12-15 06:19:23.749316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118d630 (9): Bad file descriptor 00:29:03.647 [2024-12-15 06:19:23.749330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c2360 (9): Bad file descriptor 00:29:03.647 [2024-12-15 06:19:23.749345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1198140 (9): Bad file descriptor 00:29:03.647 [2024-12-15 06:19:23.749360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1197cd0 (9): Bad file descriptor 00:29:03.647 [2024-12-15 06:19:23.749375] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1194ca0 (9): Bad file descriptor 00:29:03.647 [2024-12-15 06:19:23.751644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:03.647 [2024-12-15 06:19:23.752119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:03.647 [2024-12-15 06:19:23.752267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-12-15 06:19:23.752290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1197cd0 with addr=10.0.0.2, port=4420 00:29:03.647 [2024-12-15 06:19:23.752302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1197cd0 is same with the state(6) to be set 00:29:03.647 [2024-12-15 06:19:23.753230] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:03.647 [2024-12-15 06:19:23.753295] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:03.647 [2024-12-15 06:19:23.753393] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:03.647 [2024-12-15 06:19:23.753572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.647 [2024-12-15 06:19:23.753590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c3400 with addr=10.0.0.2, port=4420 00:29:03.647 [2024-12-15 06:19:23.753602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c3400 is same with the state(6) to be set 00:29:03.647 [2024-12-15 06:19:23.753617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1197cd0 (9): Bad file descriptor 00:29:03.647 [2024-12-15 06:19:23.753678] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:03.647 [2024-12-15 06:19:23.753749] 
nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:03.647 [2024-12-15 06:19:23.753803] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:03.647 [2024-12-15 06:19:23.753856] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:03.647 [2024-12-15 06:19:23.753941] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:03.647 [2024-12-15 06:19:23.753972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c3400 (9): Bad file descriptor 00:29:03.647 [2024-12-15 06:19:23.753990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:03.647 [2024-12-15 06:19:23.754010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:03.647 [2024-12-15 06:19:23.754022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:03.647 [2024-12-15 06:19:23.754039] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:29:03.647 [2024-12-15 06:19:23.754066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.647 [2024-12-15 06:19:23.754080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.647 [2024-12-15 06:19:23.754098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.647 [2024-12-15 06:19:23.754109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.647 [2024-12-15 06:19:23.754121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.647 [2024-12-15 06:19:23.754132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.647 [2024-12-15 06:19:23.754144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.647 [2024-12-15 06:19:23.754153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.647 [2024-12-15 06:19:23.754167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.647 [2024-12-15 06:19:23.754177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.647 [2024-12-15 06:19:23.754189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.647 [2024-12-15 06:19:23.754199] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.647 [2024-12-15 06:19:23.754210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.647 [2024-12-15 06:19:23.754220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.647 [2024-12-15 06:19:23.754232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.647 [2024-12-15 06:19:23.754241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.647 [2024-12-15 06:19:23.754253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.647 [2024-12-15 06:19:23.754262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.647 [2024-12-15 06:19:23.754273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.647 [2024-12-15 06:19:23.754284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.647 [2024-12-15 06:19:23.754295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.647 [2024-12-15 06:19:23.754305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.647 [2024-12-15 06:19:23.754317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.647 [2024-12-15 06:19:23.754327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.754344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.754354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.754366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.754376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.754388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.754399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.754411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.754421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.754433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.754443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:03.648 [2024-12-15 06:19:23.754456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.754465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.754476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.754486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.754498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.754507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.754518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.754528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.754539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.754548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.754560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.754570] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.754582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.754591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.754603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.754615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.754626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.754636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.754647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.754657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.754667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.754677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.754689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.754706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.754716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.754726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.754737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.754747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.754759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.754768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.754780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.754790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.754802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.754812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.754823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.754833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.754844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.754855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.754869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.754878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.754889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.754901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.754912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.754922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.754933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 
06:19:23.754942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.754953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.754963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.754975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.754984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.755001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.755011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.755023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.755031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.755044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.755053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.755064] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.755074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.755085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.755094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.755106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.755116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.755127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.755136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.755149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.755159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.755173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.755183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.755195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.755205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.648 [2024-12-15 06:19:23.755219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.648 [2024-12-15 06:19:23.755230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.649 [2024-12-15 06:19:23.755241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.649 [2024-12-15 06:19:23.755250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.649 [2024-12-15 06:19:23.755262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.649 [2024-12-15 06:19:23.755272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.649 [2024-12-15 06:19:23.755284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.649 [2024-12-15 06:19:23.755294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.649 [2024-12-15 06:19:23.755305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.649 
[2024-12-15 06:19:23.755315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.649 [2024-12-15 06:19:23.755326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.649 [2024-12-15 06:19:23.755335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.649 [2024-12-15 06:19:23.755347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.649 [2024-12-15 06:19:23.755357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.649 [2024-12-15 06:19:23.755368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.649 [2024-12-15 06:19:23.755377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.649 [2024-12-15 06:19:23.755388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.649 [2024-12-15 06:19:23.755398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.649 [2024-12-15 06:19:23.755410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.649 [2024-12-15 06:19:23.755419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.649 [2024-12-15 06:19:23.755431] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.649 [2024-12-15 06:19:23.755443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.649 [2024-12-15 06:19:23.755454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.649 [2024-12-15 06:19:23.755463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.649 [2024-12-15 06:19:23.755473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139bf30 is same with the state(6) to be set 00:29:03.649 [2024-12-15 06:19:23.755637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:03.649 [2024-12-15 06:19:23.755651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:03.649 [2024-12-15 06:19:23.755661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:03.649 [2024-12-15 06:19:23.755671] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:29:03.649 [2024-12-15 06:19:23.756934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:29:03.649 [2024-12-15 06:19:23.757278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.649 [2024-12-15 06:19:23.757300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1198140 with addr=10.0.0.2, port=4420
00:29:03.649 [2024-12-15 06:19:23.757312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1198140 is same with the state(6) to be set
00:29:03.649 [2024-12-15 06:19:23.757651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1198140 (9): Bad file descriptor
00:29:03.649 [2024-12-15 06:19:23.757718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:29:03.649 [2024-12-15 06:19:23.757731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:29:03.649 [2024-12-15 06:19:23.757741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:29:03.649 [2024-12-15 06:19:23.757750] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
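The "connect() failed, errno = 111" above is ECONNREFUSED on Linux: nothing was accepting connections at addr=10.0.0.2, port=4420 while the target side of the test was being torn down and reinitialized. A quick sketch to confirm the mapping (assumes the Linux errno layout; the numeric value is platform-specific):

```python
import errno
import os

# On Linux, errno 111 is ECONNREFUSED; the NVMe/TCP initiator's connect()
# to the target listener was refused while the controller was resetting.
code = errno.ECONNREFUSED  # 111 on Linux; other platforms may differ
print(code, errno.errorcode[code], os.strerror(code))
```

That refusal is what drives the subsequent spdk_nvme_ctrlr_reconnect_poll_async failure and the "Resetting controller failed" message for cnode1.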
00:29:03.649 [2024-12-15 06:19:23.759159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16059b0 (9): Bad file descriptor
00:29:03.649 [2024-12-15 06:19:23.759310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.649 [2024-12-15 06:19:23.759326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:03.649 [... identical READ/ABORTED - SQ DELETION (00/08) command-completion pairs repeat for cid 1-63, lba 16512-24448; omitted ...]
00:29:03.651 [2024-12-15 06:19:23.760785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1585e30 is same with the state(6) to be set
00:29:03.651 [2024-12-15 06:19:23.762079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.651 [2024-12-15 06:19:23.762098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:03.651 [... identical READ/ABORTED - SQ DELETION (00/08) command-completion pairs repeat for cid 1-35, lba 16512-20864; omitted ...]
00:29:03.652 [2024-12-15 06:19:23.762681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.652 [2024-12-15 06:19:23.762688] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.762698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.762705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.762713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.762720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.762729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.762736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.762745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.762753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.762761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.762768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.762776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.762783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.762791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.762799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.762807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.762816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.762824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.762831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.762840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.762846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.762854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.762862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 
06:19:23.762869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.762876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.762888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.762898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.762909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.762919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.762933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.762944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.762957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.762970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.762984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.763001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.763014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.763023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.763036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.763045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.763057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.763067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.763080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.763089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.763099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.763106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.763115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 
nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.763123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.763132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.763139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.763148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.763155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.763163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.763171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.763179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.763186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.763194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1593050 is same with the state(6) to be set 00:29:03.652 [2024-12-15 06:19:23.764200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:03.652 [2024-12-15 06:19:23.764221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.764232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.764241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.764250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.764258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.764268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.764276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.764284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.764292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.764300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.764308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.764316] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.764324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.764332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.764339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.764348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.764355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.764363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.764370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.652 [2024-12-15 06:19:23.764379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.652 [2024-12-15 06:19:23.764386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.653 [2024-12-15 06:19:23.764395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.653 [2024-12-15 06:19:23.764403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.653 [2024-12-15 06:19:23.764411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.653 [2024-12-15 06:19:23.764418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.653 [2024-12-15 06:19:23.764426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.653 [2024-12-15 06:19:23.764435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.653 [2024-12-15 06:19:23.764444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.653 [2024-12-15 06:19:23.764451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.653 [2024-12-15 06:19:23.764459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.653 [2024-12-15 06:19:23.764466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.653 [2024-12-15 06:19:23.764475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.653 [2024-12-15 06:19:23.764482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.653 [2024-12-15 06:19:23.764491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:03.653 [2024-12-15 06:19:23.764498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.653 [2024-12-15 06:19:23.764507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.653 [2024-12-15 06:19:23.764513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.653 [2024-12-15 06:19:23.764522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.653 [2024-12-15 06:19:23.764529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.653 [2024-12-15 06:19:23.764537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.653 [2024-12-15 06:19:23.764544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.653 [2024-12-15 06:19:23.764553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.653 [2024-12-15 06:19:23.764560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.653 [2024-12-15 06:19:23.764569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.653 [2024-12-15 06:19:23.764576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.653 [2024-12-15 06:19:23.764584] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.653 [2024-12-15 06:19:23.764593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.653 [2024-12-15 06:19:23.764605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.653 [2024-12-15 06:19:23.764615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.653 [2024-12-15 06:19:23.764628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.653 [2024-12-15 06:19:23.764638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.653 [2024-12-15 06:19:23.764656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.653 [2024-12-15 06:19:23.764666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.653 [2024-12-15 06:19:23.764678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.653 [2024-12-15 06:19:23.764689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.917 [2024-12-15 06:19:23.764701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.764712] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.764724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.764742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.764771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.764791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.764807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.764822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.764840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.764855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.764873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.764888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.764902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.764931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.764948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.764965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.764984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.765011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.765052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.765069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.765098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.765117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.765131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.765144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 
06:19:23.765160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.765172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.765185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.765193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.765203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.765211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.765220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.765229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.765238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.765246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.765256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.765263] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.765272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.765280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.765290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.765302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.765318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.765328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.765338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.765346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.765356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.765364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.765376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 
nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.765385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.765394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.765401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.765411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.765419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.765428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.765436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.765446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.765453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.765463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.765471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
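The long runs of paired `nvme_io_qpair_print_command` / `spdk_nvme_print_completion` notices above all report the same condition: outstanding READ commands aborted by submission-queue deletion during qpair teardown. A small standalone script (a sketch for triaging such logs, not part of the SPDK test suite; the sample lines below are illustrative, abbreviated copies of the entries above) can count the aborts per queue id:

```python
import re
from collections import Counter

# Illustrative sample in the same shape as the surrounding log entries.
log = """
[2024-12-15 06:19:23.762336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-15 06:19:23.762346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128
[2024-12-15 06:19:23.762353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
"""

# Group aborted completions by the qid field of each completion record.
aborts = Counter(
    m.group(1)
    for m in re.finditer(r"ABORTED - SQ DELETION \(00/08\) qid:(\d+)", log)
)
print(dict(aborts))  # e.g. {'1': 2} for the sample above
```

Collapsing thousands of near-identical entries into a per-qid count makes it easy to see whether aborts are confined to one I/O queue pair or spread across the controller.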
00:29:03.918 [2024-12-15 06:19:23.765480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.765488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.765496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfe2c0 is same with the state(6) to be set 00:29:03.918 [2024-12-15 06:19:23.766592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.766617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.766630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.766638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.766648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.766656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.766665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.766673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.766682] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.766689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.766702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.766709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.766719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.766726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.766736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.766744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.766753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.766760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.918 [2024-12-15 06:19:23.766771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.918 [2024-12-15 06:19:23.766779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.766787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.766795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.766804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.766812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.766820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.766827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.766836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.766843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.766852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.766859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.766868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.766876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.766885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.766893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.766903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.766913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.766921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.766928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.766937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.766944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.766954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.766961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.766970] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.766977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.766986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.767000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.767010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.767017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.767027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.767034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.767045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.767064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.767074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.767082] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.767091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.767098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.767107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.767114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.767123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.767130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.767141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.767148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.767157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.767167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.767175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.767183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.767192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.767198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.767207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.767214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.767223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.767230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.767239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.767246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.767254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.767262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 
06:19:23.767270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.767278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.767286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.767293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.767302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.767309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.767317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.767324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.767333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.767342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.767350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.767357] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.767365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.767373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.767381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.767388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.767396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.767403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.767413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.767420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.767430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.767437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.919 [2024-12-15 06:19:23.767446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 
nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.919 [2024-12-15 06:19:23.767453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.767462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.767469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.767477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.767484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.767492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.767499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.767508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.767514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.767523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.767529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:03.920 [2024-12-15 06:19:23.767539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.767546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.767555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.767562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.767571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.767578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.767587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.767594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.767602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.767609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.767617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.767625] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.767633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.767639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.767648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.767654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.767663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.767670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.767678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2299500 is same with the state(6) to be set 00:29:03.920 [2024-12-15 06:19:23.768665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.768678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.768689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.768697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.768706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.768714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.768725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.768733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.768741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.768749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.768757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.768764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.768774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.768781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.768791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:03.920 [2024-12-15 06:19:23.768798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.768807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.768815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.768824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.768831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.768839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.768847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.768855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.768863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.768872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.768880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.768889] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.768897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.768905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.768913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.768922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.768931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.768940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.768948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.768956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.768964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.768974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.768981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.768996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.769005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.769014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.769022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.769031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.769038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.769048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.769054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.769063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.769072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.769081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.769088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.769097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.920 [2024-12-15 06:19:23.769104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.920 [2024-12-15 06:19:23.769112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769176] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769267] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 
06:19:23.769447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 
nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.769701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.769709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:03.921 [2024-12-15 06:19:23.769717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e6f50 is same with the state(6) to be set 00:29:03.921 [2024-12-15 06:19:23.770710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.770723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.770735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.921 [2024-12-15 06:19:23.770748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.921 [2024-12-15 06:19:23.770757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.770764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.770773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.770781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.770789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.770797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.770806] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.770814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.770822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.770829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.770838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.770845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.770854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.770861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.770871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.770878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.770888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.770895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.770904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.770911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.770920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.770928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.770936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.770944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.770954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.770962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.770970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.770977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.770986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:03.922 [2024-12-15 06:19:23.770999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.771008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.771015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.771024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.771031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.771040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.771047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.771056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.771063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.771072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.771079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.771087] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.771094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.771103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.771110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.771118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.771125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.771133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.771140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.771149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.771158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.771167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.771173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.771182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.771189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.771198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.771205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.771214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.771221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.771229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.771236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.771244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.771252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.771260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.771267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.771276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.922 [2024-12-15 06:19:23.771284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.922 [2024-12-15 06:19:23.771292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.771299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.923 [2024-12-15 06:19:23.771308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.771314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.923 [2024-12-15 06:19:23.771323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.771330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.923 [2024-12-15 06:19:23.771338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.771345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.923 [2024-12-15 06:19:23.771355] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.771362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.923 [2024-12-15 06:19:23.771369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.771376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.923 [2024-12-15 06:19:23.771385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.771392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.923 [2024-12-15 06:19:23.771401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.771410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.923 [2024-12-15 06:19:23.771419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.771425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.923 [2024-12-15 06:19:23.771434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.771441] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.923 [2024-12-15 06:19:23.771451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.771458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.923 [2024-12-15 06:19:23.771467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.771474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.923 [2024-12-15 06:19:23.771483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.771490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.923 [2024-12-15 06:19:23.771499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.771506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.923 [2024-12-15 06:19:23.771515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.771522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.923 [2024-12-15 06:19:23.771530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.771538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.923 [2024-12-15 06:19:23.771546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.771556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.923 [2024-12-15 06:19:23.771564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.771571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.923 [2024-12-15 06:19:23.771579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.771586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.923 [2024-12-15 06:19:23.771595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.771601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.923 [2024-12-15 06:19:23.771610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.771617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.923 [2024-12-15 
06:19:23.771626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.771633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.923 [2024-12-15 06:19:23.771642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.771648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.923 [2024-12-15 06:19:23.771657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.771664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.923 [2024-12-15 06:19:23.771672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.771680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.923 [2024-12-15 06:19:23.771688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.771695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.923 [2024-12-15 06:19:23.771704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.771711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.923 [2024-12-15 06:19:23.771719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.771727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.923 [2024-12-15 06:19:23.771735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.771743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.923 [2024-12-15 06:19:23.771752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141b780 is same with the state(6) to be set 00:29:03.923 [2024-12-15 06:19:23.772706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:03.923 [2024-12-15 06:19:23.772725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:03.923 [2024-12-15 06:19:23.772737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:03.923 [2024-12-15 06:19:23.772748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:29:03.923 [2024-12-15 06:19:23.772826] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:29:03.923 [2024-12-15 06:19:23.772839] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:29:03.923 [2024-12-15 06:19:23.772912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:29:03.923 [2024-12-15 06:19:23.772925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:03.923 [2024-12-15 06:19:23.773160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.923 [2024-12-15 06:19:23.773176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118d630 with addr=10.0.0.2, port=4420 00:29:03.923 [2024-12-15 06:19:23.773186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d630 is same with the state(6) to be set 00:29:03.923 [2024-12-15 06:19:23.773343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.923 [2024-12-15 06:19:23.773354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1194ca0 with addr=10.0.0.2, port=4420 00:29:03.923 [2024-12-15 06:19:23.773363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1194ca0 is same with the state(6) to be set 00:29:03.923 [2024-12-15 06:19:23.773560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.923 [2024-12-15 06:19:23.773573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c2360 with addr=10.0.0.2, port=4420 00:29:03.923 [2024-12-15 06:19:23.773582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c2360 is same with the state(6) to be set 00:29:03.923 [2024-12-15 06:19:23.773818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.923 [2024-12-15 06:19:23.773830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10a3610 with addr=10.0.0.2, port=4420 00:29:03.923 [2024-12-15 06:19:23.773839] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a3610 is same with the state(6) to be set 00:29:03.923 [2024-12-15 06:19:23.775028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.775044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.923 [2024-12-15 06:19:23.775055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.923 [2024-12-15 06:19:23.775062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 
[2024-12-15 06:19:23.775222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 
nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:03.924 [2024-12-15 06:19:23.775497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775583] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.924 [2024-12-15 06:19:23.775719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.924 [2024-12-15 06:19:23.775726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.925 [2024-12-15 06:19:23.775734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.925 [2024-12-15 06:19:23.775741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.925 [2024-12-15 06:19:23.775749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.925 [2024-12-15 06:19:23.775756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:03.925 [2024-12-15 06:19:23.775764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.925 [2024-12-15 06:19:23.775771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.925 [2024-12-15 06:19:23.775779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.925 [2024-12-15 06:19:23.775786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.925 [2024-12-15 06:19:23.775795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.925 [2024-12-15 06:19:23.775802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.925 [2024-12-15 06:19:23.775810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.925 [2024-12-15 06:19:23.775818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.925 [2024-12-15 06:19:23.775826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.925 [2024-12-15 06:19:23.775834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.925 [2024-12-15 06:19:23.775842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.925 [2024-12-15 
06:19:23.775849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.925 [2024-12-15 06:19:23.775858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.925 [2024-12-15 06:19:23.775865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.925 [2024-12-15 06:19:23.775876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.925 [2024-12-15 06:19:23.775883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.925 [2024-12-15 06:19:23.775893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.925 [2024-12-15 06:19:23.775901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.925 [2024-12-15 06:19:23.775910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.925 [2024-12-15 06:19:23.775917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.925 [2024-12-15 06:19:23.775925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.925 [2024-12-15 06:19:23.775933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.925 [2024-12-15 06:19:23.775941] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.925 [2024-12-15 06:19:23.775948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.925 [2024-12-15 06:19:23.775957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.925 [2024-12-15 06:19:23.775964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.925 [2024-12-15 06:19:23.775972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.925 [2024-12-15 06:19:23.775980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.925 [2024-12-15 06:19:23.775988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.925 [2024-12-15 06:19:23.776000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.925 [2024-12-15 06:19:23.776008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.925 [2024-12-15 06:19:23.776015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.925 [2024-12-15 06:19:23.776023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.925 [2024-12-15 06:19:23.776031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.925 [2024-12-15 06:19:23.776039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.925 [2024-12-15 06:19:23.776047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.925 [2024-12-15 06:19:23.776055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.925 [2024-12-15 06:19:23.776062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.925 [2024-12-15 06:19:23.776070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141a440 is same with the state(6) to be set 00:29:03.925 [2024-12-15 06:19:23.777256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:03.925 [2024-12-15 06:19:23.777278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:03.925 [2024-12-15 06:19:23.777288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:03.925 task offset: 21760 on job bdev=Nvme2n1 fails 00:29:03.925 00:29:03.925 Latency(us) 00:29:03.925 [2024-12-15T05:19:24.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:03.925 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:03.925 Job: Nvme1n1 ended in about 0.62 seconds with error 00:29:03.925 Verification LBA range: start 0x0 length 0x400 00:29:03.925 Nvme1n1 : 0.62 207.62 12.98 103.81 0.00 202306.64 16352.79 207717.91 00:29:03.925 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:03.925 Job: Nvme2n1 ended in about 0.61 seconds with error 
00:29:03.925 Verification LBA range: start 0x0 length 0x400 00:29:03.925 Nvme2n1 : 0.61 209.77 13.11 104.88 0.00 194990.24 15291.73 212711.13 00:29:03.925 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:03.925 Job: Nvme3n1 ended in about 0.62 seconds with error 00:29:03.925 Verification LBA range: start 0x0 length 0x400 00:29:03.925 Nvme3n1 : 0.62 205.91 12.87 102.95 0.00 193655.14 15853.47 204721.98 00:29:03.925 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:03.925 Job: Nvme4n1 ended in about 0.62 seconds with error 00:29:03.925 Verification LBA range: start 0x0 length 0x400 00:29:03.925 Nvme4n1 : 0.62 205.15 12.82 102.58 0.00 189264.46 16227.96 209715.20 00:29:03.925 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:03.925 Job: Nvme5n1 ended in about 0.63 seconds with error 00:29:03.925 Verification LBA range: start 0x0 length 0x400 00:29:03.925 Nvme5n1 : 0.63 111.78 6.99 92.62 0.00 276678.46 30957.96 223696.21 00:29:03.925 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:03.925 Job: Nvme6n1 ended in about 0.61 seconds with error 00:29:03.925 Verification LBA range: start 0x0 length 0x400 00:29:03.925 Nvme6n1 : 0.61 209.43 13.09 104.71 0.00 174613.46 17601.10 212711.13 00:29:03.925 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:03.925 Job: Nvme7n1 ended in about 0.63 seconds with error 00:29:03.925 Verification LBA range: start 0x0 length 0x400 00:29:03.925 Nvme7n1 : 0.63 211.65 13.23 101.85 0.00 170738.94 18100.42 207717.91 00:29:03.925 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:03.925 Job: Nvme8n1 ended in about 0.63 seconds with error 00:29:03.925 Verification LBA range: start 0x0 length 0x400 00:29:03.925 Nvme8n1 : 0.63 203.04 12.69 101.52 0.00 170665.69 26464.06 202724.69 00:29:03.925 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:29:03.925 Job: Nvme9n1 ended in about 0.64 seconds with error 00:29:03.925 Verification LBA range: start 0x0 length 0x400 00:29:03.925 Nvme9n1 : 0.64 100.51 6.28 100.51 0.00 251581.68 20347.37 231685.36 00:29:03.925 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:03.925 Job: Nvme10n1 ended in about 0.63 seconds with error 00:29:03.925 Verification LBA range: start 0x0 length 0x400 00:29:03.925 Nvme10n1 : 0.63 101.19 6.32 101.19 0.00 241689.84 24716.43 234681.30 00:29:03.925 [2024-12-15T05:19:24.065Z] =================================================================================================================== 00:29:03.925 [2024-12-15T05:19:24.065Z] Total : 1766.05 110.38 1016.63 0.00 200971.92 15291.73 234681.30 00:29:03.925 [2024-12-15 06:19:23.807821] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:03.925 [2024-12-15 06:19:23.807876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:29:03.925 [2024-12-15 06:19:23.808214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.925 [2024-12-15 06:19:23.808236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15bbea0 with addr=10.0.0.2, port=4420 00:29:03.925 [2024-12-15 06:19:23.808250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bbea0 is same with the state(6) to be set 00:29:03.925 [2024-12-15 06:19:23.808393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.925 [2024-12-15 06:19:23.808405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f5ac0 with addr=10.0.0.2, port=4420 00:29:03.925 [2024-12-15 06:19:23.808421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f5ac0 is same with the state(6) to be set 00:29:03.925 [2024-12-15 06:19:23.808435] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118d630 (9): Bad file descriptor 00:29:03.925 [2024-12-15 06:19:23.808449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1194ca0 (9): Bad file descriptor 00:29:03.925 [2024-12-15 06:19:23.808458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c2360 (9): Bad file descriptor 00:29:03.926 [2024-12-15 06:19:23.808468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a3610 (9): Bad file descriptor 00:29:03.926 [2024-12-15 06:19:23.808766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.926 [2024-12-15 06:19:23.808783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1197cd0 with addr=10.0.0.2, port=4420 00:29:03.926 [2024-12-15 06:19:23.808791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1197cd0 is same with the state(6) to be set 00:29:03.926 [2024-12-15 06:19:23.808883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.926 [2024-12-15 06:19:23.808895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c3400 with addr=10.0.0.2, port=4420 00:29:03.926 [2024-12-15 06:19:23.808904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c3400 is same with the state(6) to be set 00:29:03.926 [2024-12-15 06:19:23.809042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.926 [2024-12-15 06:19:23.809055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1198140 with addr=10.0.0.2, port=4420 00:29:03.926 [2024-12-15 06:19:23.809063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1198140 is same with the state(6) to be set 00:29:03.926 [2024-12-15 06:19:23.809190] 
posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.926 [2024-12-15 06:19:23.809202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16059b0 with addr=10.0.0.2, port=4420 00:29:03.926 [2024-12-15 06:19:23.809210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16059b0 is same with the state(6) to be set 00:29:03.926 [2024-12-15 06:19:23.809220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15bbea0 (9): Bad file descriptor 00:29:03.926 [2024-12-15 06:19:23.809230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f5ac0 (9): Bad file descriptor 00:29:03.926 [2024-12-15 06:19:23.809239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:03.926 [2024-12-15 06:19:23.809246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:03.926 [2024-12-15 06:19:23.809255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:03.926 [2024-12-15 06:19:23.809264] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:29:03.926 [2024-12-15 06:19:23.809274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:03.926 [2024-12-15 06:19:23.809281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:03.926 [2024-12-15 06:19:23.809288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:03.926 [2024-12-15 06:19:23.809295] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:29:03.926 [2024-12-15 06:19:23.809302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:03.926 [2024-12-15 06:19:23.809309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:03.926 [2024-12-15 06:19:23.809318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:03.926 [2024-12-15 06:19:23.809325] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:29:03.926 [2024-12-15 06:19:23.809332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:29:03.926 [2024-12-15 06:19:23.809339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:29:03.926 [2024-12-15 06:19:23.809345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:29:03.926 [2024-12-15 06:19:23.809352] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:29:03.926 [2024-12-15 06:19:23.809397] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:29:03.926 [2024-12-15 06:19:23.809410] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:29:03.926 [2024-12-15 06:19:23.809745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1197cd0 (9): Bad file descriptor 00:29:03.926 [2024-12-15 06:19:23.809761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c3400 (9): Bad file descriptor 00:29:03.926 [2024-12-15 06:19:23.809771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1198140 (9): Bad file descriptor 00:29:03.926 [2024-12-15 06:19:23.809781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16059b0 (9): Bad file descriptor 00:29:03.926 [2024-12-15 06:19:23.809789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:29:03.926 [2024-12-15 06:19:23.809796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:29:03.926 [2024-12-15 06:19:23.809803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:29:03.926 [2024-12-15 06:19:23.809810] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:29:03.926 [2024-12-15 06:19:23.809817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:03.926 [2024-12-15 06:19:23.809823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:03.926 [2024-12-15 06:19:23.809830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:03.926 [2024-12-15 06:19:23.809837] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:29:03.926 [2024-12-15 06:19:23.810095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:29:03.926 [2024-12-15 06:19:23.810112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:03.926 [2024-12-15 06:19:23.810121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:03.926 [2024-12-15 06:19:23.810130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:03.926 [2024-12-15 06:19:23.810160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:03.926 [2024-12-15 06:19:23.810168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:03.926 [2024-12-15 06:19:23.810175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:03.926 [2024-12-15 06:19:23.810182] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:03.926 [2024-12-15 06:19:23.810193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:03.926 [2024-12-15 06:19:23.810199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:03.926 [2024-12-15 06:19:23.810207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:03.926 [2024-12-15 06:19:23.810213] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:29:03.926 [2024-12-15 06:19:23.810219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:03.926 [2024-12-15 06:19:23.810226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:03.926 [2024-12-15 06:19:23.810233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:03.926 [2024-12-15 06:19:23.810239] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:29:03.926 [2024-12-15 06:19:23.810246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:29:03.926 [2024-12-15 06:19:23.810253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:29:03.926 [2024-12-15 06:19:23.810260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:29:03.926 [2024-12-15 06:19:23.810266] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:29:03.926 [2024-12-15 06:19:23.810523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.926 [2024-12-15 06:19:23.810539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10a3610 with addr=10.0.0.2, port=4420 00:29:03.926 [2024-12-15 06:19:23.810548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a3610 is same with the state(6) to be set 00:29:03.926 [2024-12-15 06:19:23.810703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.926 [2024-12-15 06:19:23.810715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c2360 with addr=10.0.0.2, port=4420 00:29:03.926 [2024-12-15 06:19:23.810723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c2360 is same with the state(6) to be set 00:29:03.926 [2024-12-15 06:19:23.810789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.926 [2024-12-15 06:19:23.810801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1194ca0 with addr=10.0.0.2, port=4420 00:29:03.926 [2024-12-15 06:19:23.810809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1194ca0 is same with the state(6) to be set 00:29:03.926 [2024-12-15 06:19:23.810978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.926 [2024-12-15 06:19:23.810990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118d630 with addr=10.0.0.2, port=4420 00:29:03.926 [2024-12-15 06:19:23.811016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118d630 is same with the state(6) to be set 00:29:03.926 [2024-12-15 06:19:23.811049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a3610 (9): Bad file descriptor 00:29:03.926 [2024-12-15 
06:19:23.811061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c2360 (9): Bad file descriptor 00:29:03.926 [2024-12-15 06:19:23.811070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1194ca0 (9): Bad file descriptor 00:29:03.926 [2024-12-15 06:19:23.811079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118d630 (9): Bad file descriptor 00:29:03.927 [2024-12-15 06:19:23.811106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:29:03.927 [2024-12-15 06:19:23.811115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:29:03.927 [2024-12-15 06:19:23.811125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:29:03.927 [2024-12-15 06:19:23.811133] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:29:03.927 [2024-12-15 06:19:23.811140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:03.927 [2024-12-15 06:19:23.811146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:03.927 [2024-12-15 06:19:23.811154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:03.927 [2024-12-15 06:19:23.811161] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:29:03.927 [2024-12-15 06:19:23.811169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:03.927 [2024-12-15 06:19:23.811176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:03.927 [2024-12-15 06:19:23.811182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:03.927 [2024-12-15 06:19:23.811188] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:03.927 [2024-12-15 06:19:23.811196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:03.927 [2024-12-15 06:19:23.811202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:03.927 [2024-12-15 06:19:23.811209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:03.927 [2024-12-15 06:19:23.811216] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:29:04.186 06:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:29:05.124 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1095267 00:29:05.124 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:29:05.124 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1095267 00:29:05.124 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:05.124 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:05.124 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1095267 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:05.125 rmmod nvme_tcp 00:29:05.125 rmmod nvme_fabrics 00:29:05.125 rmmod nvme_keyring 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:29:05.125 06:19:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1095002 ']' 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1095002 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1095002 ']' 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1095002 00:29:05.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1095002) - No such process 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1095002 is not found' 00:29:05.125 Process with pid 1095002 is not found 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:05.125 06:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.174 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:07.174 00:29:07.174 real 0m6.746s 00:29:07.174 user 0m14.581s 00:29:07.174 sys 0m1.170s 00:29:07.174 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:07.174 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:07.174 ************************************ 00:29:07.174 END TEST nvmf_shutdown_tc3 00:29:07.174 ************************************ 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:07.476 ************************************ 00:29:07.476 START TEST nvmf_shutdown_tc4 00:29:07.476 ************************************ 00:29:07.476 06:19:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:07.476 06:19:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:07.476 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:07.477 06:19:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:07.477 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:07.477 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:07.477 06:19:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:29:07.477 Found net devices under 0000:af:00.0: cvl_0_0 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:07.477 Found net devices under 0000:af:00.1: cvl_0_1 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:07.477 06:19:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:07.477 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:07.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:07.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:29:07.477 00:29:07.477 --- 10.0.0.2 ping statistics --- 00:29:07.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.477 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:29:07.760 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:07.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:07.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:29:07.760 00:29:07.760 --- 10.0.0.1 ping statistics --- 00:29:07.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.760 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:29:07.760 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:07.760 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:29:07.760 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:07.760 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:07.760 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:07.760 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:07.760 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:07.760 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:07.760 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:07.760 06:19:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:07.760 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:07.760 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:07.760 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:07.760 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1096289 00:29:07.760 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1096289 00:29:07.760 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:07.761 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1096289 ']' 00:29:07.761 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.761 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:07.761 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:07.761 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:07.761 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:07.761 [2024-12-15 06:19:27.717014] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:29:07.761 [2024-12-15 06:19:27.717056] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:07.761 [2024-12-15 06:19:27.794090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:07.761 [2024-12-15 06:19:27.816352] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:07.761 [2024-12-15 06:19:27.816389] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:07.761 [2024-12-15 06:19:27.816396] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:07.761 [2024-12-15 06:19:27.816402] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:07.761 [2024-12-15 06:19:27.816407] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:07.761 [2024-12-15 06:19:27.817935] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:07.761 [2024-12-15 06:19:27.818043] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:07.761 [2024-12-15 06:19:27.818150] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.761 [2024-12-15 06:19:27.818152] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:08.021 [2024-12-15 06:19:27.949105] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.021 06:19:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:08.021 06:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:08.021 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:08.021 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:08.021 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:08.021 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.021 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:08.021 Malloc1 00:29:08.021 [2024-12-15 06:19:28.057334] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:08.021 Malloc2 00:29:08.021 Malloc3 00:29:08.280 Malloc4 00:29:08.280 Malloc5 00:29:08.280 Malloc6 00:29:08.280 Malloc7 00:29:08.280 Malloc8 00:29:08.280 Malloc9 
00:29:08.538 Malloc10 00:29:08.538 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.538 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:08.538 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:08.538 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:08.538 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1096442 00:29:08.538 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:29:08.538 06:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:29:08.538 [2024-12-15 06:19:28.565911] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:29:13.813 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:13.813 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1096289 00:29:13.813 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1096289 ']' 00:29:13.813 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1096289 00:29:13.813 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:29:13.813 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:13.813 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1096289 00:29:13.813 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:13.813 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:13.813 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1096289' 00:29:13.813 killing process with pid 1096289 00:29:13.813 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1096289 00:29:13.813 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1096289 00:29:13.813 [2024-12-15 06:19:33.563199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8df00 is same with the state(6) to be set 00:29:13.813 [2024-12-15 
06:19:33.563251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8df00 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.563259] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8df00 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.563267] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8df00 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.563273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8df00 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.563286] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8df00 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.566401] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c91920 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.566428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c91920 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.566435] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c91920 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.566442] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c91920 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.566449] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c91920 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.566960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c91df0 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.567005] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c91df0 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.567016] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c91df0 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.567023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c91df0 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.567030] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c91df0 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.567036] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c91df0 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.567813] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c922c0 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.567841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c922c0 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.567849] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c922c0 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.567857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c922c0 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.567864] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c922c0 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.567871] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c922c0 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.567877] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c922c0 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.567883] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c922c0 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.568881] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c91450 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.568908] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c91450 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.568916] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c91450 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.568923] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c91450 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.568930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c91450 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.568937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c91450 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.570611] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8a850 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.570639] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8a850 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.570646] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8a850 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.570653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8a850 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.570659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8a850 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.570666] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8a850 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.570672] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8a850 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.570678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8a850 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.570685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8a850 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.570690] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8a850 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.570696] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8a850 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.570702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8a850 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.570709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8a850 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.573152] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c590 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.573425] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8b700 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.573447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8b700 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.573455] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8b700 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.573461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8b700 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.573469] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8b700 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.573475] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8b700 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.573482] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8b700 is same with the state(6) to be set 00:29:13.814 [2024-12-15 06:19:33.573490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8b700 is same with the state(6) to be set 00:29:13.814 Write completed with error (sct=0, sc=8) 00:29:13.814 Write completed with error (sct=0, sc=8) 00:29:13.814 Write completed with error (sct=0, sc=8) 00:29:13.814 Write completed with error (sct=0, sc=8) 00:29:13.814 starting I/O failed: -6 00:29:13.814 Write completed with error (sct=0, sc=8) 00:29:13.814 Write completed with error (sct=0, sc=8) 00:29:13.814 Write completed with error (sct=0, sc=8) 00:29:13.814 Write completed with error (sct=0, sc=8) 00:29:13.814 starting I/O failed: -6 00:29:13.814 Write completed with error (sct=0, sc=8) 00:29:13.814 Write completed with error (sct=0, sc=8) 00:29:13.814 Write completed with error (sct=0, sc=8) 00:29:13.814 Write completed with error (sct=0, sc=8) 00:29:13.814 starting I/O failed: -6 00:29:13.814 Write completed with error (sct=0, sc=8) 00:29:13.814 Write completed with error (sct=0, sc=8) 00:29:13.814 Write completed with error (sct=0, sc=8) 00:29:13.814 Write completed with error (sct=0, sc=8) 00:29:13.814 starting I/O failed: -6 00:29:13.814 Write completed with error (sct=0, sc=8) 00:29:13.814 Write completed with error (sct=0, sc=8) 00:29:13.814 Write completed with error (sct=0, sc=8) 00:29:13.814 Write completed with error (sct=0, sc=8) 00:29:13.814 starting I/O failed: -6 00:29:13.814 Write completed with error (sct=0, sc=8) 00:29:13.814 Write completed with error (sct=0, sc=8) 00:29:13.814 
00:29:13.814 Write completed with error (sct=0, sc=8)
00:29:13.814 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:29:13.814 [2024-12-15 06:19:33.576697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-failure entries omitted ...]
00:29:13.815 [2024-12-15 06:19:33.577661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:13.815 [2024-12-15 06:19:33.577711] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ce00 is same with the state(6) to be set
00:29:13.815 [2024-12-15 06:19:33.577732] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ce00 is same with the state(6) to be set
00:29:13.815 [2024-12-15 06:19:33.577740] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ce00 is same with the state(6) to be set
00:29:13.815 [2024-12-15 06:19:33.577751] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ce00 is same with the state(6) to be set
00:29:13.815 [2024-12-15 06:19:33.577757] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ce00 is same with the state(6) to be set
00:29:13.815 [2024-12-15 06:19:33.577763] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8ce00 is same with the state(6) to be set
[... repeated write-failure entries omitted ...]
00:29:13.815 [2024-12-15 06:19:33.578045] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d2f0 is same with the state(6) to be set
00:29:13.815 [2024-12-15 06:19:33.578068] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d2f0 is same with the state(6) to be set
00:29:13.815 [2024-12-15 06:19:33.578076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d2f0 is same with the state(6) to be set
00:29:13.815 [2024-12-15 06:19:33.578083] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d2f0 is same with the state(6) to be set
00:29:13.815 [2024-12-15 06:19:33.578090] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d2f0 is same with the state(6) to be set
00:29:13.815 [2024-12-15 06:19:33.578096] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d2f0 is same with the state(6) to be set
00:29:13.815 [2024-12-15 06:19:33.578105] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d2f0 is same with the state(6) to be set
00:29:13.815 [2024-12-15 06:19:33.578113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d2f0 is same with the state(6) to be set
[... repeated write-failure entries omitted ...]
00:29:13.815 [2024-12-15 06:19:33.578376] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d7e0 is same with the state(6) to be set
00:29:13.815 [2024-12-15 06:19:33.578399] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d7e0 is same with the state(6) to be set
00:29:13.815 [2024-12-15 06:19:33.578410] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d7e0 is same with the state(6) to be set
00:29:13.815 [2024-12-15 06:19:33.578417] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d7e0 is same with the state(6) to be set
00:29:13.815 [2024-12-15 06:19:33.578424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d7e0 is same with the state(6) to be set
00:29:13.815 [2024-12-15 06:19:33.578430] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d7e0 is same with the state(6) to be set
[... repeated write-failure entries omitted ...]
00:29:13.815 [2024-12-15 06:19:33.578654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:13.815 [2024-12-15 06:19:33.578726] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c930 is same with the state(6) to be set
00:29:13.815 [2024-12-15 06:19:33.578744] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c930 is same with the state(6) to be set
00:29:13.815 [2024-12-15 06:19:33.578753] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c930 is same with the state(6) to be set
00:29:13.815 [2024-12-15 06:19:33.578760] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c930 is same with the state(6) to be set
00:29:13.815 [2024-12-15 06:19:33.578767] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c930 is same with the state(6) to be set
00:29:13.815 [2024-12-15 06:19:33.578774] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8c930 is same with the state(6) to be set
[... repeated write-failure entries omitted ...]
00:29:13.816 [2024-12-15 06:19:33.580190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.816 NVMe io qpair process completion error
00:29:13.816 [2024-12-15 06:19:33.580239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7fe20 is same with the state(6) to be set
00:29:13.816 [2024-12-15 06:19:33.580251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7fe20 is same with the state(6) to be set
00:29:13.816 [2024-12-15 06:19:33.580258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7fe20 is same with the state(6) to be set
00:29:13.816 [2024-12-15 06:19:33.580264] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7fe20 is same with the state(6) to be set
00:29:13.816 [2024-12-15 06:19:33.580271] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7fe20 is same with the state(6) to be set
00:29:13.816 [2024-12-15 06:19:33.580277] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7fe20 is same with the state(6) to be set
00:29:13.816 [2024-12-15 06:19:33.580283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7fe20 is same with the state(6) to be set
00:29:13.816 [2024-12-15 06:19:33.580289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7fe20 is same with the state(6) to be set
[... repeated write-failure entries omitted ...]
00:29:13.816 [2024-12-15 06:19:33.581146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-failure entries omitted ...]
00:29:13.817 [2024-12-15 06:19:33.582016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-failure entries omitted ...]
00:29:13.817 [2024-12-15 06:19:33.582988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-failure entries omitted ...]
00:29:13.817 [2024-12-15 06:19:33.584496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.817 NVMe io qpair process completion error
[... repeated write-failure entries omitted ...]
00:29:13.818 [2024-12-15 06:19:33.585569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-failure entries omitted ...]
00:29:13.818 Write completed with
error (sct=0, sc=8) 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 [2024-12-15 06:19:33.586477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, 
sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O 
failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.818 [2024-12-15 06:19:33.587475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:13.818 Write completed with error (sct=0, sc=8) 00:29:13.818 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 
00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: 
-6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 [2024-12-15 06:19:33.589635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device 
or address) on qpair id 2 00:29:13.819 NVMe io qpair process completion error 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error 
(sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 [2024-12-15 06:19:33.590735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write 
completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 starting I/O failed: -6 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.819 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error 
(sct=0, sc=8) 00:29:13.820 [2024-12-15 06:19:33.591631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 
00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 Write completed with 
error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 [2024-12-15 06:19:33.592665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 
00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: 
-6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 Write completed with error (sct=0, sc=8) 00:29:13.820 starting I/O failed: -6 00:29:13.820 [2024-12-15 06:19:33.597976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device 
or address) on qpair id 2
00:29:13.820 NVMe io qpair process completion error
00:29:13.820 Write completed with error (sct=0, sc=8)
00:29:13.820 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines trimmed ...]
00:29:13.821 [2024-12-15 06:19:33.598809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines trimmed ...]
00:29:13.821 [2024-12-15 06:19:33.599780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines trimmed ...]
00:29:13.821 [2024-12-15 06:19:33.600797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines trimmed ...]
00:29:13.822 [2024-12-15 06:19:33.605024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:13.822 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines trimmed ...]
00:29:13.822 [2024-12-15 06:19:33.606078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines trimmed ...]
00:29:13.822 [2024-12-15 06:19:33.606989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines trimmed ...]
00:29:13.823 [2024-12-15 06:19:33.607974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines trimmed ...]
00:29:13.823 [2024-12-15 06:19:33.609791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.823 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines trimmed ...]
00:29:13.824 [2024-12-15 06:19:33.610921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines trimmed ...]
00:29:13.824 [2024-12-15 06:19:33.611873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines trimmed ...]
Write completed with error (sct=0, sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.824 [2024-12-15 06:19:33.612907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.824 Write completed with error (sct=0, 
sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.824 Write completed with error (sct=0, sc=8) 00:29:13.824 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error 
(sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with 
error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 [2024-12-15 06:19:33.614642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.825 NVMe io qpair process completion error 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write 
completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error 
(sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 
00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 
Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 
00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: 
-6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O 
failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 
00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 
00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with 
error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 
starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 Write completed with error (sct=0, sc=8) 00:29:13.827 starting I/O failed: -6 00:29:13.827 
00:29:13.827 Write completed with error (sct=0, sc=8)
00:29:13.827 starting I/O failed: -6
[... duplicate "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:29:13.828 [2024-12-15 06:19:33.625899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.828 [2024-12-15 06:19:33.626801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:13.828 [2024-12-15 06:19:33.627790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:13.829 [2024-12-15 06:19:33.630196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.829 NVMe io qpair process completion error
00:29:13.829 Initializing NVMe Controllers
00:29:13.829 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:29:13.829 Controller IO queue size 128, less than required.
00:29:13.829 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:13.829 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:29:13.829 Controller IO queue size 128, less than required.
00:29:13.829 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:13.829 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:29:13.829 Controller IO queue size 128, less than required.
00:29:13.829 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:13.829 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:13.829 Controller IO queue size 128, less than required.
00:29:13.829 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:13.829 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:29:13.829 Controller IO queue size 128, less than required.
00:29:13.829 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:13.829 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:29:13.829 Controller IO queue size 128, less than required.
00:29:13.829 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:13.829 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:29:13.829 Controller IO queue size 128, less than required.
00:29:13.829 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:13.829 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:29:13.829 Controller IO queue size 128, less than required.
00:29:13.829 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:13.829 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:29:13.829 Controller IO queue size 128, less than required.
00:29:13.829 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:13.829 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:29:13.829 Controller IO queue size 128, less than required.
00:29:13.829 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:13.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:29:13.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:29:13.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:29:13.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:13.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:29:13.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:29:13.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:29:13.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:29:13.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:29:13.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:29:13.829 Initialization complete. Launching workers.
00:29:13.829 ========================================================
00:29:13.829 Latency(us)
00:29:13.829 Device Information                                     :       IOPS      MiB/s    Average        min        max
00:29:13.829 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    2191.52      94.17   58402.77     830.04  112949.27
00:29:13.829 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    2178.18      93.59   58779.66     859.65  126950.40
00:29:13.829 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    2208.63      94.90   57996.73     696.20  118430.85
00:29:13.829 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2197.78      94.44   57643.32     704.91  104218.85
00:29:13.829 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    2212.80      95.08   57261.24     728.81  102435.09
00:29:13.829 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    2210.92      95.00   57321.63     853.02   99142.59
00:29:13.829 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   2166.29      93.08   58520.66     864.49   98376.33
00:29:13.829 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    2183.18      93.81   58123.63     576.76   97545.20
00:29:13.829 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    2210.08      94.96   57459.61     677.87  107796.95
00:29:13.829 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    2196.95      94.40   57818.95     458.50  111027.93
00:29:13.829 ========================================================
00:29:13.829 Total                                                  :   21956.33     943.44   57929.91     458.50  126950.40
00:29:13.829
00:29:13.829 [2024-12-15 06:19:33.633173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1632190 is same with the state(6) to be set
00:29:13.829 [2024-12-15 06:19:33.633222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1632370 is same with the state(6) to be set
00:29:13.829 [2024-12-15 06:19:33.633253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633cc0 is same with the state(6) to be set
00:29:13.830 [2024-12-15 06:19:33.633282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1637b30 is same with the state(6) to be set
00:29:13.830 [2024-12-15 06:19:33.633311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1632550 is same with the state(6) to be set
00:29:13.830 [2024-12-15 06:19:33.633340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1634320 is same with the state(6) to be set
00:29:13.830 [2024-12-15 06:19:33.633369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631fb0 is same with the state(6) to be set
00:29:13.830 [2024-12-15 06:19:33.633398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1634650 is same with the state(6) to be set
00:29:13.830 [2024-12-15 06:19:33.633427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1632880 is same with the state(6) to be set
00:29:13.830 [2024-12-15 06:19:33.633456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1633ff0 is same with the state(6) to be set
00:29:13.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:13.830 06:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:29:15.208 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1096442
00:29:15.208 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:29:15.208 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1096442
00:29:15.208 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@640 -- # local arg=wait 00:29:15.208 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:15.208 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:29:15.208 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:15.208 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1096442 00:29:15.208 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:29:15.208 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:15.208 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:15.208 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:15.208 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:29:15.208 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:15.208 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:15.208 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:15.208 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:15.208 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:29:15.208 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:29:15.208 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:15.208 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:29:15.208 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:15.208 06:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:15.208 rmmod nvme_tcp 00:29:15.208 rmmod nvme_fabrics 00:29:15.208 rmmod nvme_keyring 00:29:15.208 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:15.208 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:29:15.208 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:29:15.208 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1096289 ']' 00:29:15.208 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1096289 00:29:15.208 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1096289 ']' 00:29:15.208 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1096289 00:29:15.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1096289) - No such process 00:29:15.208 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1096289 is not found' 00:29:15.208 Process with pid 1096289 is not found 
00:29:15.208 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:15.208 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:15.208 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:15.208 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:29:15.208 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:29:15.208 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:15.208 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:29:15.208 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:15.208 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:15.208 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.208 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:15.208 06:19:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.113 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:17.113 00:29:17.113 real 0m9.749s 00:29:17.113 user 0m24.913s 00:29:17.113 sys 0m5.149s 00:29:17.113 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:17.113 06:19:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:17.113 ************************************ 00:29:17.113 END TEST nvmf_shutdown_tc4 00:29:17.113 ************************************ 00:29:17.113 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:29:17.113 00:29:17.113 real 0m39.928s 00:29:17.113 user 1m36.957s 00:29:17.113 sys 0m13.750s 00:29:17.113 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:17.113 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:17.113 ************************************ 00:29:17.113 END TEST nvmf_shutdown 00:29:17.113 ************************************ 00:29:17.113 06:19:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:17.113 06:19:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:17.113 06:19:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:17.113 06:19:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:17.113 ************************************ 00:29:17.113 START TEST nvmf_nsid 00:29:17.113 ************************************ 00:29:17.113 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:17.373 * Looking for test storage... 
00:29:17.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:17.373 
06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:29:17.373 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:17.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.374 --rc genhtml_branch_coverage=1 00:29:17.374 --rc genhtml_function_coverage=1 00:29:17.374 --rc genhtml_legend=1 00:29:17.374 --rc geninfo_all_blocks=1 00:29:17.374 --rc 
geninfo_unexecuted_blocks=1 00:29:17.374 00:29:17.374 ' 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:17.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.374 --rc genhtml_branch_coverage=1 00:29:17.374 --rc genhtml_function_coverage=1 00:29:17.374 --rc genhtml_legend=1 00:29:17.374 --rc geninfo_all_blocks=1 00:29:17.374 --rc geninfo_unexecuted_blocks=1 00:29:17.374 00:29:17.374 ' 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:17.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.374 --rc genhtml_branch_coverage=1 00:29:17.374 --rc genhtml_function_coverage=1 00:29:17.374 --rc genhtml_legend=1 00:29:17.374 --rc geninfo_all_blocks=1 00:29:17.374 --rc geninfo_unexecuted_blocks=1 00:29:17.374 00:29:17.374 ' 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:17.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.374 --rc genhtml_branch_coverage=1 00:29:17.374 --rc genhtml_function_coverage=1 00:29:17.374 --rc genhtml_legend=1 00:29:17.374 --rc geninfo_all_blocks=1 00:29:17.374 --rc geninfo_unexecuted_blocks=1 00:29:17.374 00:29:17.374 ' 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:17.374 06:19:37 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:17.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:29:17.374 06:19:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:23.945 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:23.945 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:29:23.945 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:23.945 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:23.945 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:23.945 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:23.945 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:23.945 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:29:23.945 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:23.945 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:29:23.945 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:29:23.945 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:29:23.945 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:29:23.945 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:23.946 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:23.946 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:23.946 Found net devices under 0000:af:00.0: cvl_0_0 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:23.946 Found net devices under 0000:af:00.1: cvl_0_1 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:23.946 06:19:43 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:23.946 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:29:23.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:29:23.946 00:29:23.946 --- 10.0.0.2 ping statistics --- 00:29:23.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.946 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:23.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:23.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:29:23.946 00:29:23.946 --- 10.0.0.1 ping statistics --- 00:29:23.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.946 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:23.946 06:19:43 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1100932 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1100932 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1100932 ']' 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:23.946 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:23.947 [2024-12-15 06:19:43.406392] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:29:23.947 [2024-12-15 06:19:43.406437] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:23.947 [2024-12-15 06:19:43.483813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.947 [2024-12-15 06:19:43.505188] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:23.947 [2024-12-15 06:19:43.505225] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:23.947 [2024-12-15 06:19:43.505232] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:23.947 [2024-12-15 06:19:43.505239] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:23.947 [2024-12-15 06:19:43.505244] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:23.947 [2024-12-15 06:19:43.505733] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1100953 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:23.947 
06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=b04c5de8-568e-4a75-a656-8f08c8cac323 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=b47aa10d-48df-4b8a-8551-9c2334c03e0b 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=dabdbf8f-55c3-4176-8707-1bb3d83d9f08 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:23.947 null0 00:29:23.947 null1 00:29:23.947 [2024-12-15 06:19:43.680301] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:29:23.947 [2024-12-15 06:19:43.680345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1100953 ] 00:29:23.947 null2 00:29:23.947 [2024-12-15 06:19:43.687882] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:23.947 [2024-12-15 06:19:43.712077] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1100953 /var/tmp/tgt2.sock 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1100953 ']' 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:29:23.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:23.947 [2024-12-15 06:19:43.756360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.947 [2024-12-15 06:19:43.778668] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:23.947 06:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:29:24.206 [2024-12-15 06:19:44.297711] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.206 [2024-12-15 06:19:44.313794] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:29:24.206 nvme0n1 nvme0n2 00:29:24.206 nvme1n1 00:29:24.465 06:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:29:24.465 06:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:29:24.465 06:19:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:25.401 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:29:25.401 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:29:25.401 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:29:25.401 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:29:25.401 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:29:25.401 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:29:25.401 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:29:25.401 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:25.401 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:25.401 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:25.401 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:29:25.401 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:29:25.401 06:19:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:29:26.337 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:26.337 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:26.337 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:26.337 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:29:26.337 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:26.337 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid b04c5de8-568e-4a75-a656-8f08c8cac323 00:29:26.337 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:26.337 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:29:26.337 06:19:46 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:29:26.337 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:29:26.337 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=b04c5de8568e4a75a6568f08c8cac323 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo B04C5DE8568E4A75A6568F08C8CAC323 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ B04C5DE8568E4A75A6568F08C8CAC323 == \B\0\4\C\5\D\E\8\5\6\8\E\4\A\7\5\A\6\5\6\8\F\0\8\C\8\C\A\C\3\2\3 ]] 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid b47aa10d-48df-4b8a-8551-9c2334c03e0b 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:29:26.596 
06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=b47aa10d48df4b8a85519c2334c03e0b 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo B47AA10D48DF4B8A85519C2334C03E0B 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ B47AA10D48DF4B8A85519C2334C03E0B == \B\4\7\A\A\1\0\D\4\8\D\F\4\B\8\A\8\5\5\1\9\C\2\3\3\4\C\0\3\E\0\B ]] 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid dabdbf8f-55c3-4176-8707-1bb3d83d9f08 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=dabdbf8f55c3417687071bb3d83d9f08 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DABDBF8F55C3417687071BB3D83D9F08 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ DABDBF8F55C3417687071BB3D83D9F08 == \D\A\B\D\B\F\8\F\5\5\C\3\4\1\7\6\8\7\0\7\1\B\B\3\D\8\3\D\9\F\0\8 ]] 00:29:26.596 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:29:26.855 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:29:26.855 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:29:26.855 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1100953 00:29:26.855 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1100953 ']' 00:29:26.855 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1100953 00:29:26.855 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:26.855 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:26.855 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1100953 00:29:26.855 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:26.855 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:26.855 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1100953' 00:29:26.855 killing process with pid 1100953 00:29:26.855 06:19:46 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1100953 00:29:26.855 06:19:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1100953 00:29:27.114 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:29:27.114 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:27.114 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:29:27.114 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:27.114 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:29:27.114 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:27.114 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:27.114 rmmod nvme_tcp 00:29:27.114 rmmod nvme_fabrics 00:29:27.114 rmmod nvme_keyring 00:29:27.114 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:27.373 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:29:27.373 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:29:27.373 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1100932 ']' 00:29:27.373 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1100932 00:29:27.373 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1100932 ']' 00:29:27.373 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1100932 00:29:27.373 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:27.373 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:27.373 06:19:47 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1100932 00:29:27.373 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:27.373 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:27.373 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1100932' 00:29:27.373 killing process with pid 1100932 00:29:27.373 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1100932 00:29:27.373 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1100932 00:29:27.373 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:27.373 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:27.373 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:27.373 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:29:27.373 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:29:27.373 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:27.373 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:29:27.373 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:27.373 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:27.373 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.373 06:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:27.373 06:19:47 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.907 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:29.907 00:29:29.907 real 0m12.314s 00:29:29.907 user 0m9.509s 00:29:29.907 sys 0m5.504s 00:29:29.907 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:29.907 06:19:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:29.907 ************************************ 00:29:29.907 END TEST nvmf_nsid 00:29:29.907 ************************************ 00:29:29.907 06:19:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:29.907 00:29:29.907 real 18m30.134s 00:29:29.907 user 48m57.234s 00:29:29.907 sys 4m45.268s 00:29:29.907 06:19:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:29.907 06:19:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:29.907 ************************************ 00:29:29.907 END TEST nvmf_target_extra 00:29:29.907 ************************************ 00:29:29.908 06:19:49 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:29.908 06:19:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:29.908 06:19:49 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:29.908 06:19:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:29.908 ************************************ 00:29:29.908 START TEST nvmf_host 00:29:29.908 ************************************ 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:29.908 * Looking for test storage... 
00:29:29.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:29.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.908 --rc genhtml_branch_coverage=1 00:29:29.908 --rc genhtml_function_coverage=1 00:29:29.908 --rc genhtml_legend=1 00:29:29.908 --rc geninfo_all_blocks=1 00:29:29.908 --rc geninfo_unexecuted_blocks=1 00:29:29.908 00:29:29.908 ' 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:29.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.908 --rc genhtml_branch_coverage=1 00:29:29.908 --rc genhtml_function_coverage=1 00:29:29.908 --rc genhtml_legend=1 00:29:29.908 --rc 
geninfo_all_blocks=1 00:29:29.908 --rc geninfo_unexecuted_blocks=1 00:29:29.908 00:29:29.908 ' 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:29.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.908 --rc genhtml_branch_coverage=1 00:29:29.908 --rc genhtml_function_coverage=1 00:29:29.908 --rc genhtml_legend=1 00:29:29.908 --rc geninfo_all_blocks=1 00:29:29.908 --rc geninfo_unexecuted_blocks=1 00:29:29.908 00:29:29.908 ' 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:29.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.908 --rc genhtml_branch_coverage=1 00:29:29.908 --rc genhtml_function_coverage=1 00:29:29.908 --rc genhtml_legend=1 00:29:29.908 --rc geninfo_all_blocks=1 00:29:29.908 --rc geninfo_unexecuted_blocks=1 00:29:29.908 00:29:29.908 ' 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:29.908 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:29.908 06:19:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.908 ************************************ 00:29:29.908 START TEST nvmf_multicontroller 00:29:29.908 ************************************ 00:29:29.909 06:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:29.909 * Looking for test storage... 
00:29:29.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:29.909 06:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:29.909 06:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:29:29.909 06:19:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:29.909 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:29.909 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:29.909 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:29.909 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:30.168 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:29:30.168 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:29:30.168 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:29:30.168 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:29:30.168 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:29:30.168 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:29:30.168 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:29:30.168 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:30.168 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:29:30.168 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:29:30.168 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:29:30.168 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:30.168 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:29:30.168 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:29:30.168 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:30.168 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:29:30.168 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:29:30.168 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:29:30.168 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:29:30.168 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:30.168 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:29:30.168 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:29:30.168 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:30.168 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:30.168 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:29:30.168 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:30.168 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:30.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.168 --rc genhtml_branch_coverage=1 00:29:30.168 --rc genhtml_function_coverage=1 
00:29:30.168 --rc genhtml_legend=1 00:29:30.168 --rc geninfo_all_blocks=1 00:29:30.168 --rc geninfo_unexecuted_blocks=1 00:29:30.168 00:29:30.168 ' 00:29:30.168 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:30.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.168 --rc genhtml_branch_coverage=1 00:29:30.168 --rc genhtml_function_coverage=1 00:29:30.168 --rc genhtml_legend=1 00:29:30.168 --rc geninfo_all_blocks=1 00:29:30.168 --rc geninfo_unexecuted_blocks=1 00:29:30.168 00:29:30.168 ' 00:29:30.168 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:30.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.168 --rc genhtml_branch_coverage=1 00:29:30.168 --rc genhtml_function_coverage=1 00:29:30.168 --rc genhtml_legend=1 00:29:30.168 --rc geninfo_all_blocks=1 00:29:30.168 --rc geninfo_unexecuted_blocks=1 00:29:30.168 00:29:30.168 ' 00:29:30.168 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:30.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.168 --rc genhtml_branch_coverage=1 00:29:30.168 --rc genhtml_function_coverage=1 00:29:30.168 --rc genhtml_legend=1 00:29:30.168 --rc geninfo_all_blocks=1 00:29:30.168 --rc geninfo_unexecuted_blocks=1 00:29:30.169 00:29:30.169 ' 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:30.169 06:19:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:30.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:29:30.169 06:19:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:36.737 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:36.737 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:36.737 06:19:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:36.737 Found net devices under 0000:af:00.0: cvl_0_0 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:36.737 Found net devices under 0000:af:00.1: cvl_0_1 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:36.737 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:36.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:36.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:29:36.738 00:29:36.738 --- 10.0.0.2 ping statistics --- 00:29:36.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.738 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:36.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:36.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:29:36.738 00:29:36.738 --- 10.0.0.1 ping statistics --- 00:29:36.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.738 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1105159 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1105159 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1105159 ']' 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:36.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:36.738 06:19:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.738 [2024-12-15 06:19:56.027460] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:29:36.738 [2024-12-15 06:19:56.027505] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:36.738 [2024-12-15 06:19:56.107038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:36.738 [2024-12-15 06:19:56.129316] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:36.738 [2024-12-15 06:19:56.129351] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:36.738 [2024-12-15 06:19:56.129358] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:36.738 [2024-12-15 06:19:56.129364] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:36.738 [2024-12-15 06:19:56.129369] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:36.738 [2024-12-15 06:19:56.130677] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:36.738 [2024-12-15 06:19:56.130787] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:36.738 [2024-12-15 06:19:56.130788] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.738 [2024-12-15 06:19:56.261215] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.738 Malloc0 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.738 [2024-12-15 
06:19:56.324575] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.738 [2024-12-15 06:19:56.336529] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.738 Malloc1 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:36.738 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1105212 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1105212 /var/tmp/bdevperf.sock 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1105212 ']' 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:36.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.739 NVMe0n1 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.739 1 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:36.739 06:19:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.739 request: 00:29:36.739 { 00:29:36.739 "name": "NVMe0", 00:29:36.739 "trtype": "tcp", 00:29:36.739 "traddr": "10.0.0.2", 00:29:36.739 "adrfam": "ipv4", 00:29:36.739 "trsvcid": "4420", 00:29:36.739 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:36.739 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:36.739 "hostaddr": "10.0.0.1", 00:29:36.739 "prchk_reftag": false, 00:29:36.739 "prchk_guard": false, 00:29:36.739 "hdgst": false, 00:29:36.739 "ddgst": false, 00:29:36.739 "allow_unrecognized_csi": false, 00:29:36.739 "method": "bdev_nvme_attach_controller", 00:29:36.739 "req_id": 1 00:29:36.739 } 00:29:36.739 Got JSON-RPC error response 00:29:36.739 response: 00:29:36.739 { 00:29:36.739 "code": -114, 00:29:36.739 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:36.739 } 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:36.739 06:19:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.739 request: 00:29:36.739 { 00:29:36.739 "name": "NVMe0", 00:29:36.739 "trtype": "tcp", 00:29:36.739 "traddr": "10.0.0.2", 00:29:36.739 "adrfam": "ipv4", 00:29:36.739 "trsvcid": "4420", 00:29:36.739 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:36.739 "hostaddr": "10.0.0.1", 00:29:36.739 "prchk_reftag": false, 00:29:36.739 "prchk_guard": false, 00:29:36.739 "hdgst": false, 00:29:36.739 "ddgst": false, 00:29:36.739 "allow_unrecognized_csi": false, 00:29:36.739 "method": "bdev_nvme_attach_controller", 00:29:36.739 "req_id": 1 00:29:36.739 } 00:29:36.739 Got JSON-RPC error response 00:29:36.739 response: 00:29:36.739 { 00:29:36.739 "code": -114, 00:29:36.739 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:36.739 } 00:29:36.739 06:19:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:36.739 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:36.997 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:36.997 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:36.997 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.997 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.997 request: 00:29:36.997 { 00:29:36.997 "name": "NVMe0", 00:29:36.997 "trtype": "tcp", 00:29:36.997 "traddr": "10.0.0.2", 00:29:36.997 "adrfam": "ipv4", 00:29:36.997 "trsvcid": "4420", 00:29:36.997 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:36.997 "hostaddr": "10.0.0.1", 00:29:36.997 "prchk_reftag": false, 00:29:36.997 "prchk_guard": false, 00:29:36.997 "hdgst": false, 00:29:36.997 "ddgst": false, 00:29:36.997 "multipath": "disable", 00:29:36.997 "allow_unrecognized_csi": false, 00:29:36.997 "method": "bdev_nvme_attach_controller", 00:29:36.997 "req_id": 1 00:29:36.997 } 00:29:36.997 Got JSON-RPC error response 00:29:36.997 response: 00:29:36.997 { 00:29:36.997 "code": -114, 00:29:36.997 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:36.997 } 00:29:36.997 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:36.997 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:36.997 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:36.997 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:36.997 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:36.997 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:36.997 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:36.997 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:36.997 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:36.997 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:36.997 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:36.997 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:36.997 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:36.997 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.997 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.997 request: 00:29:36.997 { 00:29:36.997 "name": "NVMe0", 00:29:36.997 "trtype": "tcp", 00:29:36.997 "traddr": "10.0.0.2", 00:29:36.997 "adrfam": "ipv4", 00:29:36.997 "trsvcid": "4420", 00:29:36.997 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:36.997 "hostaddr": "10.0.0.1", 00:29:36.997 "prchk_reftag": false, 00:29:36.997 "prchk_guard": false, 00:29:36.997 "hdgst": false, 00:29:36.997 "ddgst": false, 00:29:36.997 "multipath": "failover", 00:29:36.997 "allow_unrecognized_csi": false, 00:29:36.997 "method": "bdev_nvme_attach_controller", 00:29:36.997 "req_id": 1 00:29:36.997 } 00:29:36.997 Got JSON-RPC error response 00:29:36.997 response: 00:29:36.997 { 00:29:36.997 "code": -114, 00:29:36.997 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:36.997 } 00:29:36.997 06:19:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:36.997 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:36.997 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:36.997 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:36.997 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:36.997 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:36.997 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.997 06:19:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:37.255 NVMe0n1 00:29:37.255 06:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.255 06:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:37.255 06:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.255 06:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:37.255 06:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.255 06:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:37.255 06:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.255 06:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:37.255 00:29:37.255 06:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.255 06:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:37.255 06:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:37.255 06:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.255 06:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:37.255 06:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.255 06:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:37.255 06:19:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:38.628 { 00:29:38.628 "results": [ 00:29:38.628 { 00:29:38.628 "job": "NVMe0n1", 00:29:38.628 "core_mask": "0x1", 00:29:38.628 "workload": "write", 00:29:38.628 "status": "finished", 00:29:38.628 "queue_depth": 128, 00:29:38.628 "io_size": 4096, 00:29:38.628 "runtime": 1.007796, 00:29:38.628 "iops": 25222.366431301572, 00:29:38.628 "mibps": 98.52486887227177, 00:29:38.628 "io_failed": 0, 00:29:38.628 "io_timeout": 0, 00:29:38.628 "avg_latency_us": 5069.058264103155, 00:29:38.628 "min_latency_us": 2995.9314285714286, 00:29:38.628 "max_latency_us": 10048.853333333333 00:29:38.628 } 00:29:38.628 ], 00:29:38.628 "core_count": 1 00:29:38.628 } 00:29:38.628 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:38.628 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.628 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:38.628 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.628 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:38.628 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1105212 00:29:38.628 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1105212 ']' 00:29:38.628 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1105212 00:29:38.628 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:38.628 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:38.628 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1105212 00:29:38.628 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:38.628 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:38.628 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1105212' 00:29:38.628 killing process with pid 1105212 00:29:38.628 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1105212 00:29:38.628 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1105212 00:29:38.628 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:38.628 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.628 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:38.628 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.628 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:38.628 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.628 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:38.628 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.628 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:38.628 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:38.628 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:38.628 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:38.628 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:29:38.628 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:29:38.628 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:38.629 [2024-12-15 06:19:56.439377] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:29:38.629 [2024-12-15 06:19:56.439424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1105212 ] 00:29:38.629 [2024-12-15 06:19:56.513769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.629 [2024-12-15 06:19:56.536387] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.629 [2024-12-15 06:19:57.261911] bdev.c:4957:bdev_name_add: *ERROR*: Bdev name 21ed816e-768f-4695-8110-72ecedfc6ebf already exists 00:29:38.629 [2024-12-15 06:19:57.261940] bdev.c:8177:bdev_register: *ERROR*: Unable to add uuid:21ed816e-768f-4695-8110-72ecedfc6ebf alias for bdev NVMe1n1 00:29:38.629 [2024-12-15 06:19:57.261949] bdev_nvme.c:4666:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:38.629 Running I/O for 1 seconds... 00:29:38.629 25164.00 IOPS, 98.30 MiB/s 00:29:38.629 Latency(us) 00:29:38.629 [2024-12-15T05:19:58.769Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:38.629 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:38.629 NVMe0n1 : 1.01 25222.37 98.52 0.00 0.00 5069.06 2995.93 10048.85 00:29:38.629 [2024-12-15T05:19:58.769Z] =================================================================================================================== 00:29:38.629 [2024-12-15T05:19:58.769Z] Total : 25222.37 98.52 0.00 0.00 5069.06 2995.93 10048.85 00:29:38.629 Received shutdown signal, test time was about 1.000000 seconds 00:29:38.629 00:29:38.629 Latency(us) 00:29:38.629 [2024-12-15T05:19:58.769Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:38.629 [2024-12-15T05:19:58.769Z] =================================================================================================================== 00:29:38.629 [2024-12-15T05:19:58.769Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:29:38.629 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:38.629 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:38.629 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:38.629 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:38.629 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:38.629 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:38.629 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:38.629 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:38.629 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:38.629 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:38.629 rmmod nvme_tcp 00:29:38.629 rmmod nvme_fabrics 00:29:38.629 rmmod nvme_keyring 00:29:38.629 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:38.629 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:38.629 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:38.629 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1105159 ']' 00:29:38.629 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1105159 00:29:38.629 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1105159 ']' 00:29:38.629 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1105159 
00:29:38.629 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:38.629 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:38.629 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1105159 00:29:38.888 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:38.888 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:38.888 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1105159' 00:29:38.888 killing process with pid 1105159 00:29:38.888 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1105159 00:29:38.888 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1105159 00:29:38.888 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:38.888 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:38.888 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:38.888 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:38.888 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:29:38.888 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:38.888 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:29:38.888 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:38.888 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:29:38.888 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.888 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:38.888 06:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:41.424 00:29:41.424 real 0m11.178s 00:29:41.424 user 0m12.642s 00:29:41.424 sys 0m5.118s 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:41.424 ************************************ 00:29:41.424 END TEST nvmf_multicontroller 00:29:41.424 ************************************ 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.424 ************************************ 00:29:41.424 START TEST nvmf_aer 00:29:41.424 ************************************ 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:41.424 * Looking for test storage... 
00:29:41.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:41.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.424 --rc genhtml_branch_coverage=1 00:29:41.424 --rc genhtml_function_coverage=1 00:29:41.424 --rc genhtml_legend=1 00:29:41.424 --rc geninfo_all_blocks=1 00:29:41.424 --rc geninfo_unexecuted_blocks=1 00:29:41.424 00:29:41.424 ' 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:41.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.424 --rc 
genhtml_branch_coverage=1 00:29:41.424 --rc genhtml_function_coverage=1 00:29:41.424 --rc genhtml_legend=1 00:29:41.424 --rc geninfo_all_blocks=1 00:29:41.424 --rc geninfo_unexecuted_blocks=1 00:29:41.424 00:29:41.424 ' 00:29:41.424 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:41.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.424 --rc genhtml_branch_coverage=1 00:29:41.424 --rc genhtml_function_coverage=1 00:29:41.424 --rc genhtml_legend=1 00:29:41.424 --rc geninfo_all_blocks=1 00:29:41.424 --rc geninfo_unexecuted_blocks=1 00:29:41.424 00:29:41.425 ' 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:41.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.425 --rc genhtml_branch_coverage=1 00:29:41.425 --rc genhtml_function_coverage=1 00:29:41.425 --rc genhtml_legend=1 00:29:41.425 --rc geninfo_all_blocks=1 00:29:41.425 --rc geninfo_unexecuted_blocks=1 00:29:41.425 00:29:41.425 ' 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:41.425 06:20:01 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:41.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:41.425 06:20:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:47.997 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:47.997 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.997 06:20:06 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:47.997 Found net devices under 0000:af:00.0: cvl_0_0 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:47.997 Found net devices under 0000:af:00.1: cvl_0_1 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:47.997 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:47.998 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:47.998 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:47.998 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:47.998 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:47.998 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:47.998 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:47.998 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:47.998 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:47.998 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:47.998 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:47.998 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:47.998 06:20:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:47.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:47.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:29:47.998 00:29:47.998 --- 10.0.0.2 ping statistics --- 00:29:47.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.998 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:47.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:47.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:29:47.998 00:29:47.998 --- 10.0.0.1 ping statistics --- 00:29:47.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.998 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1108972 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1108972 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1108972 ']' 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.998 [2024-12-15 06:20:07.276342] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:29:47.998 [2024-12-15 06:20:07.276388] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.998 [2024-12-15 06:20:07.355561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:47.998 [2024-12-15 06:20:07.378456] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:47.998 [2024-12-15 06:20:07.378493] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:47.998 [2024-12-15 06:20:07.378500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:47.998 [2024-12-15 06:20:07.378506] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:47.998 [2024-12-15 06:20:07.378511] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:47.998 [2024-12-15 06:20:07.379807] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.998 [2024-12-15 06:20:07.379922] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:47.998 [2024-12-15 06:20:07.380041] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.998 [2024-12-15 06:20:07.380042] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.998 [2024-12-15 06:20:07.511696] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.998 Malloc0 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.998 [2024-12-15 06:20:07.573562] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.998 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.998 [ 00:29:47.998 { 00:29:47.998 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:47.998 "subtype": "Discovery", 00:29:47.998 "listen_addresses": [], 00:29:47.998 "allow_any_host": true, 00:29:47.998 "hosts": [] 00:29:47.998 }, 00:29:47.998 { 00:29:47.998 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:47.998 "subtype": "NVMe", 00:29:47.998 "listen_addresses": [ 00:29:47.998 { 00:29:47.998 "trtype": "TCP", 00:29:47.999 "adrfam": "IPv4", 00:29:47.999 "traddr": "10.0.0.2", 00:29:47.999 "trsvcid": "4420" 00:29:47.999 } 00:29:47.999 ], 00:29:47.999 "allow_any_host": true, 00:29:47.999 "hosts": [], 00:29:47.999 "serial_number": "SPDK00000000000001", 00:29:47.999 "model_number": "SPDK bdev Controller", 00:29:47.999 "max_namespaces": 2, 00:29:47.999 "min_cntlid": 1, 00:29:47.999 "max_cntlid": 65519, 00:29:47.999 "namespaces": [ 00:29:47.999 { 00:29:47.999 "nsid": 1, 00:29:47.999 "bdev_name": "Malloc0", 00:29:47.999 "name": "Malloc0", 00:29:47.999 "nguid": "3DAC6C18D2AC42A092153F8F0BAD681A", 00:29:47.999 "uuid": "3dac6c18-d2ac-42a0-9215-3f8f0bad681a" 00:29:47.999 } 00:29:47.999 ] 00:29:47.999 } 00:29:47.999 ] 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1109164 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.999 Malloc1 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.999 Asynchronous Event Request test 00:29:47.999 Attaching to 10.0.0.2 00:29:47.999 Attached to 10.0.0.2 00:29:47.999 Registering asynchronous event callbacks... 00:29:47.999 Starting namespace attribute notice tests for all controllers... 00:29:47.999 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:47.999 aer_cb - Changed Namespace 00:29:47.999 Cleaning up... 
00:29:47.999 [ 00:29:47.999 { 00:29:47.999 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:47.999 "subtype": "Discovery", 00:29:47.999 "listen_addresses": [], 00:29:47.999 "allow_any_host": true, 00:29:47.999 "hosts": [] 00:29:47.999 }, 00:29:47.999 { 00:29:47.999 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:47.999 "subtype": "NVMe", 00:29:47.999 "listen_addresses": [ 00:29:47.999 { 00:29:47.999 "trtype": "TCP", 00:29:47.999 "adrfam": "IPv4", 00:29:47.999 "traddr": "10.0.0.2", 00:29:47.999 "trsvcid": "4420" 00:29:47.999 } 00:29:47.999 ], 00:29:47.999 "allow_any_host": true, 00:29:47.999 "hosts": [], 00:29:47.999 "serial_number": "SPDK00000000000001", 00:29:47.999 "model_number": "SPDK bdev Controller", 00:29:47.999 "max_namespaces": 2, 00:29:47.999 "min_cntlid": 1, 00:29:47.999 "max_cntlid": 65519, 00:29:47.999 "namespaces": [ 00:29:47.999 { 00:29:47.999 "nsid": 1, 00:29:47.999 "bdev_name": "Malloc0", 00:29:47.999 "name": "Malloc0", 00:29:47.999 "nguid": "3DAC6C18D2AC42A092153F8F0BAD681A", 00:29:47.999 "uuid": "3dac6c18-d2ac-42a0-9215-3f8f0bad681a" 00:29:47.999 }, 00:29:47.999 { 00:29:47.999 "nsid": 2, 00:29:47.999 "bdev_name": "Malloc1", 00:29:47.999 "name": "Malloc1", 00:29:47.999 "nguid": "21CB867FAAC34F87BF86CCEBBC85746E", 00:29:47.999 "uuid": "21cb867f-aac3-4f87-bf86-ccebbc85746e" 00:29:47.999 } 00:29:47.999 ] 00:29:47.999 } 00:29:47.999 ] 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1109164 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.999 06:20:07 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:47.999 rmmod nvme_tcp 00:29:47.999 rmmod nvme_fabrics 00:29:47.999 rmmod nvme_keyring 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
1108972 ']' 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1108972 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1108972 ']' 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1108972 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:29:47.999 06:20:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:47.999 06:20:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1108972 00:29:47.999 06:20:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:47.999 06:20:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:47.999 06:20:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1108972' 00:29:47.999 killing process with pid 1108972 00:29:47.999 06:20:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1108972 00:29:47.999 06:20:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1108972 00:29:48.259 06:20:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:48.259 06:20:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:48.259 06:20:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:48.259 06:20:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:29:48.259 06:20:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:29:48.259 06:20:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:48.259 06:20:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:29:48.259 06:20:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:48.259 06:20:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:48.259 06:20:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.259 06:20:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:48.259 06:20:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.163 06:20:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:50.163 00:29:50.163 real 0m9.147s 00:29:50.163 user 0m5.022s 00:29:50.163 sys 0m4.822s 00:29:50.163 06:20:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:50.163 06:20:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:50.163 ************************************ 00:29:50.163 END TEST nvmf_aer 00:29:50.163 ************************************ 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.422 ************************************ 00:29:50.422 START TEST nvmf_async_init 00:29:50.422 ************************************ 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:50.422 * Looking for test storage... 
00:29:50.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:50.422 06:20:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:50.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.422 --rc genhtml_branch_coverage=1 00:29:50.422 --rc genhtml_function_coverage=1 00:29:50.422 --rc genhtml_legend=1 00:29:50.422 --rc geninfo_all_blocks=1 00:29:50.422 --rc geninfo_unexecuted_blocks=1 00:29:50.422 
00:29:50.422 ' 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:50.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.422 --rc genhtml_branch_coverage=1 00:29:50.422 --rc genhtml_function_coverage=1 00:29:50.422 --rc genhtml_legend=1 00:29:50.422 --rc geninfo_all_blocks=1 00:29:50.422 --rc geninfo_unexecuted_blocks=1 00:29:50.422 00:29:50.422 ' 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:50.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.422 --rc genhtml_branch_coverage=1 00:29:50.422 --rc genhtml_function_coverage=1 00:29:50.422 --rc genhtml_legend=1 00:29:50.422 --rc geninfo_all_blocks=1 00:29:50.422 --rc geninfo_unexecuted_blocks=1 00:29:50.422 00:29:50.422 ' 00:29:50.422 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:50.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.422 --rc genhtml_branch_coverage=1 00:29:50.422 --rc genhtml_function_coverage=1 00:29:50.422 --rc genhtml_legend=1 00:29:50.422 --rc geninfo_all_blocks=1 00:29:50.422 --rc geninfo_unexecuted_blocks=1 00:29:50.422 00:29:50.422 ' 00:29:50.423 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:50.423 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:50.423 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:50.423 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:50.423 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:50.423 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:50.423 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:50.423 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:50.423 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:50.423 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:50.423 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:50.423 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:50.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=8221973416214abfa46de21f774e9d76 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.681 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.682 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:50.682 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:50.682 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:50.682 06:20:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:57.247 06:20:16 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:57.247 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:57.247 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:57.247 Found net devices under 0000:af:00.0: cvl_0_0 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.247 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:57.248 Found net devices under 0000:af:00.1: cvl_0_1 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:57.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:57.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:29:57.248 00:29:57.248 --- 10.0.0.2 ping statistics --- 00:29:57.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.248 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:57.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:57.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:29:57.248 00:29:57.248 --- 10.0.0.1 ping statistics --- 00:29:57.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.248 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1112625 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1112625 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1112625 ']' 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:57.248 [2024-12-15 06:20:16.545806] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:29:57.248 [2024-12-15 06:20:16.545849] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:57.248 [2024-12-15 06:20:16.625551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.248 [2024-12-15 06:20:16.647125] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:57.248 [2024-12-15 06:20:16.647164] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:57.248 [2024-12-15 06:20:16.647171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:57.248 [2024-12-15 06:20:16.647177] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:57.248 [2024-12-15 06:20:16.647183] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:57.248 [2024-12-15 06:20:16.647668] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:57.248 [2024-12-15 06:20:16.778736] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:57.248 null0 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8221973416214abfa46de21f774e9d76 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:57.248 [2024-12-15 06:20:16.831004] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.248 06:20:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:57.248 nvme0n1 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:57.249 [ 00:29:57.249 { 00:29:57.249 "name": "nvme0n1", 00:29:57.249 "aliases": [ 00:29:57.249 "82219734-1621-4abf-a46d-e21f774e9d76" 00:29:57.249 ], 00:29:57.249 "product_name": "NVMe disk", 00:29:57.249 "block_size": 512, 00:29:57.249 "num_blocks": 2097152, 00:29:57.249 "uuid": "82219734-1621-4abf-a46d-e21f774e9d76", 00:29:57.249 "numa_id": 1, 00:29:57.249 "assigned_rate_limits": { 00:29:57.249 "rw_ios_per_sec": 0, 00:29:57.249 "rw_mbytes_per_sec": 0, 00:29:57.249 "r_mbytes_per_sec": 0, 00:29:57.249 "w_mbytes_per_sec": 0 00:29:57.249 }, 00:29:57.249 "claimed": false, 00:29:57.249 "zoned": false, 00:29:57.249 "supported_io_types": { 00:29:57.249 "read": true, 00:29:57.249 "write": true, 00:29:57.249 "unmap": false, 00:29:57.249 "flush": true, 00:29:57.249 "reset": true, 00:29:57.249 "nvme_admin": true, 00:29:57.249 "nvme_io": true, 00:29:57.249 "nvme_io_md": false, 00:29:57.249 "write_zeroes": true, 00:29:57.249 "zcopy": false, 00:29:57.249 "get_zone_info": false, 00:29:57.249 "zone_management": false, 00:29:57.249 "zone_append": false, 00:29:57.249 "compare": true, 00:29:57.249 "compare_and_write": true, 00:29:57.249 "abort": true, 00:29:57.249 "seek_hole": false, 00:29:57.249 "seek_data": false, 00:29:57.249 "copy": true, 00:29:57.249 
"nvme_iov_md": false 00:29:57.249 }, 00:29:57.249 "memory_domains": [ 00:29:57.249 { 00:29:57.249 "dma_device_id": "system", 00:29:57.249 "dma_device_type": 1 00:29:57.249 } 00:29:57.249 ], 00:29:57.249 "driver_specific": { 00:29:57.249 "nvme": [ 00:29:57.249 { 00:29:57.249 "trid": { 00:29:57.249 "trtype": "TCP", 00:29:57.249 "adrfam": "IPv4", 00:29:57.249 "traddr": "10.0.0.2", 00:29:57.249 "trsvcid": "4420", 00:29:57.249 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:57.249 }, 00:29:57.249 "ctrlr_data": { 00:29:57.249 "cntlid": 1, 00:29:57.249 "vendor_id": "0x8086", 00:29:57.249 "model_number": "SPDK bdev Controller", 00:29:57.249 "serial_number": "00000000000000000000", 00:29:57.249 "firmware_revision": "25.01", 00:29:57.249 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:57.249 "oacs": { 00:29:57.249 "security": 0, 00:29:57.249 "format": 0, 00:29:57.249 "firmware": 0, 00:29:57.249 "ns_manage": 0 00:29:57.249 }, 00:29:57.249 "multi_ctrlr": true, 00:29:57.249 "ana_reporting": false 00:29:57.249 }, 00:29:57.249 "vs": { 00:29:57.249 "nvme_version": "1.3" 00:29:57.249 }, 00:29:57.249 "ns_data": { 00:29:57.249 "id": 1, 00:29:57.249 "can_share": true 00:29:57.249 } 00:29:57.249 } 00:29:57.249 ], 00:29:57.249 "mp_policy": "active_passive" 00:29:57.249 } 00:29:57.249 } 00:29:57.249 ] 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:57.249 [2024-12-15 06:20:17.099547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:57.249 [2024-12-15 06:20:17.099602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x24e9a90 (9): Bad file descriptor 00:29:57.249 [2024-12-15 06:20:17.231072] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:57.249 [ 00:29:57.249 { 00:29:57.249 "name": "nvme0n1", 00:29:57.249 "aliases": [ 00:29:57.249 "82219734-1621-4abf-a46d-e21f774e9d76" 00:29:57.249 ], 00:29:57.249 "product_name": "NVMe disk", 00:29:57.249 "block_size": 512, 00:29:57.249 "num_blocks": 2097152, 00:29:57.249 "uuid": "82219734-1621-4abf-a46d-e21f774e9d76", 00:29:57.249 "numa_id": 1, 00:29:57.249 "assigned_rate_limits": { 00:29:57.249 "rw_ios_per_sec": 0, 00:29:57.249 "rw_mbytes_per_sec": 0, 00:29:57.249 "r_mbytes_per_sec": 0, 00:29:57.249 "w_mbytes_per_sec": 0 00:29:57.249 }, 00:29:57.249 "claimed": false, 00:29:57.249 "zoned": false, 00:29:57.249 "supported_io_types": { 00:29:57.249 "read": true, 00:29:57.249 "write": true, 00:29:57.249 "unmap": false, 00:29:57.249 "flush": true, 00:29:57.249 "reset": true, 00:29:57.249 "nvme_admin": true, 00:29:57.249 "nvme_io": true, 00:29:57.249 "nvme_io_md": false, 00:29:57.249 "write_zeroes": true, 00:29:57.249 "zcopy": false, 00:29:57.249 "get_zone_info": false, 00:29:57.249 "zone_management": false, 00:29:57.249 "zone_append": false, 00:29:57.249 "compare": true, 00:29:57.249 "compare_and_write": true, 00:29:57.249 "abort": true, 00:29:57.249 "seek_hole": false, 00:29:57.249 "seek_data": false, 00:29:57.249 "copy": true, 00:29:57.249 "nvme_iov_md": false 00:29:57.249 }, 00:29:57.249 "memory_domains": [ 
00:29:57.249 { 00:29:57.249 "dma_device_id": "system", 00:29:57.249 "dma_device_type": 1 00:29:57.249 } 00:29:57.249 ], 00:29:57.249 "driver_specific": { 00:29:57.249 "nvme": [ 00:29:57.249 { 00:29:57.249 "trid": { 00:29:57.249 "trtype": "TCP", 00:29:57.249 "adrfam": "IPv4", 00:29:57.249 "traddr": "10.0.0.2", 00:29:57.249 "trsvcid": "4420", 00:29:57.249 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:57.249 }, 00:29:57.249 "ctrlr_data": { 00:29:57.249 "cntlid": 2, 00:29:57.249 "vendor_id": "0x8086", 00:29:57.249 "model_number": "SPDK bdev Controller", 00:29:57.249 "serial_number": "00000000000000000000", 00:29:57.249 "firmware_revision": "25.01", 00:29:57.249 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:57.249 "oacs": { 00:29:57.249 "security": 0, 00:29:57.249 "format": 0, 00:29:57.249 "firmware": 0, 00:29:57.249 "ns_manage": 0 00:29:57.249 }, 00:29:57.249 "multi_ctrlr": true, 00:29:57.249 "ana_reporting": false 00:29:57.249 }, 00:29:57.249 "vs": { 00:29:57.249 "nvme_version": "1.3" 00:29:57.249 }, 00:29:57.249 "ns_data": { 00:29:57.249 "id": 1, 00:29:57.249 "can_share": true 00:29:57.249 } 00:29:57.249 } 00:29:57.249 ], 00:29:57.249 "mp_policy": "active_passive" 00:29:57.249 } 00:29:57.249 } 00:29:57.249 ] 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Q5AB6UVTk2 
00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Q5AB6UVTk2 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.Q5AB6UVTk2 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:57.249 [2024-12-15 06:20:17.304157] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:57.249 [2024-12-15 06:20:17.304247] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.249 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:57.249 [2024-12-15 06:20:17.324226] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:57.508 nvme0n1 00:29:57.508 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.508 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:57.508 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.508 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:57.508 [ 00:29:57.508 { 00:29:57.508 "name": "nvme0n1", 00:29:57.508 "aliases": [ 00:29:57.508 "82219734-1621-4abf-a46d-e21f774e9d76" 00:29:57.508 ], 00:29:57.508 "product_name": "NVMe disk", 00:29:57.508 "block_size": 512, 00:29:57.508 "num_blocks": 2097152, 00:29:57.508 "uuid": "82219734-1621-4abf-a46d-e21f774e9d76", 00:29:57.508 "numa_id": 1, 00:29:57.508 "assigned_rate_limits": { 00:29:57.508 "rw_ios_per_sec": 0, 00:29:57.508 
"rw_mbytes_per_sec": 0, 00:29:57.508 "r_mbytes_per_sec": 0, 00:29:57.508 "w_mbytes_per_sec": 0 00:29:57.508 }, 00:29:57.508 "claimed": false, 00:29:57.508 "zoned": false, 00:29:57.508 "supported_io_types": { 00:29:57.508 "read": true, 00:29:57.508 "write": true, 00:29:57.508 "unmap": false, 00:29:57.508 "flush": true, 00:29:57.508 "reset": true, 00:29:57.508 "nvme_admin": true, 00:29:57.508 "nvme_io": true, 00:29:57.508 "nvme_io_md": false, 00:29:57.508 "write_zeroes": true, 00:29:57.508 "zcopy": false, 00:29:57.508 "get_zone_info": false, 00:29:57.508 "zone_management": false, 00:29:57.508 "zone_append": false, 00:29:57.508 "compare": true, 00:29:57.508 "compare_and_write": true, 00:29:57.508 "abort": true, 00:29:57.508 "seek_hole": false, 00:29:57.508 "seek_data": false, 00:29:57.508 "copy": true, 00:29:57.508 "nvme_iov_md": false 00:29:57.508 }, 00:29:57.508 "memory_domains": [ 00:29:57.508 { 00:29:57.508 "dma_device_id": "system", 00:29:57.508 "dma_device_type": 1 00:29:57.508 } 00:29:57.508 ], 00:29:57.508 "driver_specific": { 00:29:57.508 "nvme": [ 00:29:57.508 { 00:29:57.508 "trid": { 00:29:57.508 "trtype": "TCP", 00:29:57.508 "adrfam": "IPv4", 00:29:57.508 "traddr": "10.0.0.2", 00:29:57.508 "trsvcid": "4421", 00:29:57.508 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:57.508 }, 00:29:57.508 "ctrlr_data": { 00:29:57.508 "cntlid": 3, 00:29:57.508 "vendor_id": "0x8086", 00:29:57.508 "model_number": "SPDK bdev Controller", 00:29:57.508 "serial_number": "00000000000000000000", 00:29:57.508 "firmware_revision": "25.01", 00:29:57.508 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:57.508 "oacs": { 00:29:57.508 "security": 0, 00:29:57.508 "format": 0, 00:29:57.508 "firmware": 0, 00:29:57.508 "ns_manage": 0 00:29:57.508 }, 00:29:57.508 "multi_ctrlr": true, 00:29:57.508 "ana_reporting": false 00:29:57.508 }, 00:29:57.508 "vs": { 00:29:57.508 "nvme_version": "1.3" 00:29:57.508 }, 00:29:57.508 "ns_data": { 00:29:57.508 "id": 1, 00:29:57.508 "can_share": true 00:29:57.508 } 
00:29:57.508 } 00:29:57.508 ], 00:29:57.508 "mp_policy": "active_passive" 00:29:57.508 } 00:29:57.508 } 00:29:57.508 ] 00:29:57.508 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.508 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:57.508 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.508 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:57.508 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.508 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.Q5AB6UVTk2 00:29:57.508 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:29:57.508 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:57.508 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:57.508 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:57.509 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:57.509 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:57.509 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:57.509 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:57.509 rmmod nvme_tcp 00:29:57.509 rmmod nvme_fabrics 00:29:57.509 rmmod nvme_keyring 00:29:57.509 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:57.509 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:57.509 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:57.509 06:20:17 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1112625 ']' 00:29:57.509 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1112625 00:29:57.509 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1112625 ']' 00:29:57.509 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1112625 00:29:57.509 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:29:57.509 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:57.509 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1112625 00:29:57.509 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:57.509 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:57.509 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1112625' 00:29:57.509 killing process with pid 1112625 00:29:57.509 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1112625 00:29:57.509 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1112625 00:29:57.767 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:57.767 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:57.767 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:57.767 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:29:57.767 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:29:57.767 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:57.767 
06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:29:57.767 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:57.767 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:57.767 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.767 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:57.767 06:20:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.746 06:20:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:59.746 00:29:59.746 real 0m9.413s 00:29:59.746 user 0m3.067s 00:29:59.746 sys 0m4.742s 00:29:59.746 06:20:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:59.746 06:20:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:59.746 ************************************ 00:29:59.746 END TEST nvmf_async_init 00:29:59.746 ************************************ 00:29:59.746 06:20:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:59.746 06:20:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:59.746 06:20:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:59.746 06:20:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.746 ************************************ 00:29:59.746 START TEST dma 00:29:59.746 ************************************ 00:29:59.746 06:20:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:30:00.006 * Looking for test storage... 00:30:00.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:00.006 06:20:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:00.006 06:20:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:30:00.006 06:20:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:00.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.006 --rc genhtml_branch_coverage=1 00:30:00.006 --rc genhtml_function_coverage=1 00:30:00.006 --rc genhtml_legend=1 00:30:00.006 --rc geninfo_all_blocks=1 00:30:00.006 --rc geninfo_unexecuted_blocks=1 00:30:00.006 00:30:00.006 ' 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:00.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.006 --rc genhtml_branch_coverage=1 00:30:00.006 --rc genhtml_function_coverage=1 
00:30:00.006 --rc genhtml_legend=1 00:30:00.006 --rc geninfo_all_blocks=1 00:30:00.006 --rc geninfo_unexecuted_blocks=1 00:30:00.006 00:30:00.006 ' 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:00.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.006 --rc genhtml_branch_coverage=1 00:30:00.006 --rc genhtml_function_coverage=1 00:30:00.006 --rc genhtml_legend=1 00:30:00.006 --rc geninfo_all_blocks=1 00:30:00.006 --rc geninfo_unexecuted_blocks=1 00:30:00.006 00:30:00.006 ' 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:00.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.006 --rc genhtml_branch_coverage=1 00:30:00.006 --rc genhtml_function_coverage=1 00:30:00.006 --rc genhtml_legend=1 00:30:00.006 --rc geninfo_all_blocks=1 00:30:00.006 --rc geninfo_unexecuted_blocks=1 00:30:00.006 00:30:00.006 ' 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:00.006 06:20:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:30:00.007 
06:20:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:00.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:30:00.007 00:30:00.007 real 0m0.215s 00:30:00.007 user 0m0.142s 00:30:00.007 sys 0m0.086s 00:30:00.007 06:20:20 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:00.007 ************************************ 00:30:00.007 END TEST dma 00:30:00.007 ************************************ 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.007 ************************************ 00:30:00.007 START TEST nvmf_identify 00:30:00.007 ************************************ 00:30:00.007 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:00.267 * Looking for test storage... 
00:30:00.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:00.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.267 --rc genhtml_branch_coverage=1 00:30:00.267 --rc genhtml_function_coverage=1 00:30:00.267 --rc genhtml_legend=1 00:30:00.267 --rc geninfo_all_blocks=1 00:30:00.267 --rc geninfo_unexecuted_blocks=1 00:30:00.267 00:30:00.267 ' 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:30:00.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.267 --rc genhtml_branch_coverage=1 00:30:00.267 --rc genhtml_function_coverage=1 00:30:00.267 --rc genhtml_legend=1 00:30:00.267 --rc geninfo_all_blocks=1 00:30:00.267 --rc geninfo_unexecuted_blocks=1 00:30:00.267 00:30:00.267 ' 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:00.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.267 --rc genhtml_branch_coverage=1 00:30:00.267 --rc genhtml_function_coverage=1 00:30:00.267 --rc genhtml_legend=1 00:30:00.267 --rc geninfo_all_blocks=1 00:30:00.267 --rc geninfo_unexecuted_blocks=1 00:30:00.267 00:30:00.267 ' 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:00.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.267 --rc genhtml_branch_coverage=1 00:30:00.267 --rc genhtml_function_coverage=1 00:30:00.267 --rc genhtml_legend=1 00:30:00.267 --rc geninfo_all_blocks=1 00:30:00.267 --rc geninfo_unexecuted_blocks=1 00:30:00.267 00:30:00.267 ' 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.267 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:30:00.268 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.268 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:30:00.268 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:00.268 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:00.268 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:00.268 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:00.268 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:00.268 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:00.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:00.268 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:00.268 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:00.268 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:00.268 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:00.268 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:00.268 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:00.268 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:00.268 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:00.268 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:00.268 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:00.268 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:00.268 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.268 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:00.268 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.268 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:00.268 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:00.268 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:30:00.268 06:20:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:06.840 06:20:25 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:06.840 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:06.840 
06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:06.840 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:06.840 Found net devices under 0000:af:00.0: cvl_0_0 00:30:06.840 06:20:25 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:06.840 Found net devices under 0000:af:00.1: cvl_0_1 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:06.840 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:06.841 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:06.841 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:06.841 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:30:06.841 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:06.841 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:06.841 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:06.841 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:06.841 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:06.841 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:06.841 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:06.841 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:06.841 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:06.841 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:06.841 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:06.841 06:20:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:06.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:06.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:30:06.841 00:30:06.841 --- 10.0.0.2 ping statistics --- 00:30:06.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.841 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:06.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:06.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:30:06.841 00:30:06.841 --- 10.0.0.1 ping statistics --- 00:30:06.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.841 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1116388 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1116388 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1116388 ']' 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:06.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.841 [2024-12-15 06:20:26.340822] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:30:06.841 [2024-12-15 06:20:26.340870] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:06.841 [2024-12-15 06:20:26.419974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:06.841 [2024-12-15 06:20:26.443717] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:06.841 [2024-12-15 06:20:26.443755] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:06.841 [2024-12-15 06:20:26.443762] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:06.841 [2024-12-15 06:20:26.443768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:06.841 [2024-12-15 06:20:26.443773] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:06.841 [2024-12-15 06:20:26.445108] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.841 [2024-12-15 06:20:26.445215] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:30:06.841 [2024-12-15 06:20:26.445324] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.841 [2024-12-15 06:20:26.445325] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.841 [2024-12-15 06:20:26.536602] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.841 Malloc0 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.841 06:20:26 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.841 [2024-12-15 06:20:26.639758] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.841 06:20:26 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.841 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.841 [ 00:30:06.841 { 00:30:06.841 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:06.841 "subtype": "Discovery", 00:30:06.841 "listen_addresses": [ 00:30:06.841 { 00:30:06.841 "trtype": "TCP", 00:30:06.841 "adrfam": "IPv4", 00:30:06.841 "traddr": "10.0.0.2", 00:30:06.841 "trsvcid": "4420" 00:30:06.841 } 00:30:06.841 ], 00:30:06.841 "allow_any_host": true, 00:30:06.841 "hosts": [] 00:30:06.841 }, 00:30:06.841 { 00:30:06.841 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:06.841 "subtype": "NVMe", 00:30:06.841 "listen_addresses": [ 00:30:06.841 { 00:30:06.841 "trtype": "TCP", 00:30:06.841 "adrfam": "IPv4", 00:30:06.841 "traddr": "10.0.0.2", 00:30:06.841 "trsvcid": "4420" 00:30:06.841 } 00:30:06.841 ], 00:30:06.841 "allow_any_host": true, 00:30:06.841 "hosts": [], 00:30:06.841 "serial_number": "SPDK00000000000001", 00:30:06.841 "model_number": "SPDK bdev Controller", 00:30:06.842 "max_namespaces": 32, 00:30:06.842 "min_cntlid": 1, 00:30:06.842 "max_cntlid": 65519, 00:30:06.842 "namespaces": [ 00:30:06.842 { 00:30:06.842 "nsid": 1, 00:30:06.842 "bdev_name": "Malloc0", 00:30:06.842 "name": "Malloc0", 00:30:06.842 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:06.842 "eui64": "ABCDEF0123456789", 00:30:06.842 "uuid": "9d20f355-c7f9-45bd-bfbf-1835c1c07db9" 00:30:06.842 } 00:30:06.842 ] 00:30:06.842 } 00:30:06.842 ] 00:30:06.842 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.842 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:06.842 [2024-12-15 06:20:26.695237] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:30:06.842 [2024-12-15 06:20:26.695278] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1116414 ] 00:30:06.842 [2024-12-15 06:20:26.736484] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:30:06.842 [2024-12-15 06:20:26.736527] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:06.842 [2024-12-15 06:20:26.736532] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:06.842 [2024-12-15 06:20:26.736543] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:06.842 [2024-12-15 06:20:26.736551] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:06.842 [2024-12-15 06:20:26.740222] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:30:06.842 [2024-12-15 06:20:26.740256] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xcbbed0 0 00:30:06.842 [2024-12-15 06:20:26.748005] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:06.842 [2024-12-15 06:20:26.748019] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:06.842 [2024-12-15 06:20:26.748023] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:06.842 [2024-12-15 06:20:26.748026] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:06.842 [2024-12-15 06:20:26.748056] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.842 [2024-12-15 06:20:26.748061] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.842 [2024-12-15 06:20:26.748065] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcbbed0) 00:30:06.842 [2024-12-15 06:20:26.748079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:06.842 [2024-12-15 06:20:26.748095] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27540, cid 0, qid 0 00:30:06.842 [2024-12-15 06:20:26.755001] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.842 [2024-12-15 06:20:26.755010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.842 [2024-12-15 06:20:26.755013] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.842 [2024-12-15 06:20:26.755017] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27540) on tqpair=0xcbbed0 00:30:06.842 [2024-12-15 06:20:26.755030] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:06.842 [2024-12-15 06:20:26.755036] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:30:06.842 [2024-12-15 06:20:26.755042] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:30:06.842 [2024-12-15 06:20:26.755052] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.842 [2024-12-15 06:20:26.755056] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.842 [2024-12-15 06:20:26.755059] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcbbed0) 
00:30:06.842 [2024-12-15 06:20:26.755066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.842 [2024-12-15 06:20:26.755081] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27540, cid 0, qid 0 00:30:06.842 [2024-12-15 06:20:26.755248] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.842 [2024-12-15 06:20:26.755254] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.842 [2024-12-15 06:20:26.755257] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.842 [2024-12-15 06:20:26.755260] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27540) on tqpair=0xcbbed0 00:30:06.842 [2024-12-15 06:20:26.755265] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:30:06.842 [2024-12-15 06:20:26.755272] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:30:06.842 [2024-12-15 06:20:26.755278] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.842 [2024-12-15 06:20:26.755281] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.842 [2024-12-15 06:20:26.755284] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcbbed0) 00:30:06.842 [2024-12-15 06:20:26.755290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.842 [2024-12-15 06:20:26.755300] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27540, cid 0, qid 0 00:30:06.842 [2024-12-15 06:20:26.755388] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.842 [2024-12-15 06:20:26.755394] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:30:06.842 [2024-12-15 06:20:26.755397] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.842 [2024-12-15 06:20:26.755400] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27540) on tqpair=0xcbbed0 00:30:06.842 [2024-12-15 06:20:26.755405] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:30:06.842 [2024-12-15 06:20:26.755412] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:06.842 [2024-12-15 06:20:26.755418] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.842 [2024-12-15 06:20:26.755422] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.842 [2024-12-15 06:20:26.755425] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcbbed0) 00:30:06.842 [2024-12-15 06:20:26.755430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.842 [2024-12-15 06:20:26.755439] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27540, cid 0, qid 0 00:30:06.842 [2024-12-15 06:20:26.755538] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.842 [2024-12-15 06:20:26.755544] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.842 [2024-12-15 06:20:26.755547] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.842 [2024-12-15 06:20:26.755550] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27540) on tqpair=0xcbbed0 00:30:06.842 [2024-12-15 06:20:26.755555] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:06.842 [2024-12-15 06:20:26.755563] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.842 [2024-12-15 06:20:26.755567] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.842 [2024-12-15 06:20:26.755569] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcbbed0) 00:30:06.842 [2024-12-15 06:20:26.755575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.842 [2024-12-15 06:20:26.755584] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27540, cid 0, qid 0 00:30:06.842 [2024-12-15 06:20:26.755690] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.842 [2024-12-15 06:20:26.755696] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.842 [2024-12-15 06:20:26.755699] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.842 [2024-12-15 06:20:26.755703] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27540) on tqpair=0xcbbed0 00:30:06.842 [2024-12-15 06:20:26.755707] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:06.842 [2024-12-15 06:20:26.755711] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:06.842 [2024-12-15 06:20:26.755718] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:06.842 [2024-12-15 06:20:26.755826] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:30:06.842 [2024-12-15 06:20:26.755830] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:30:06.842 [2024-12-15 06:20:26.755838] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.842 [2024-12-15 06:20:26.755841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.842 [2024-12-15 06:20:26.755844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcbbed0) 00:30:06.842 [2024-12-15 06:20:26.755850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.842 [2024-12-15 06:20:26.755859] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27540, cid 0, qid 0 00:30:06.842 [2024-12-15 06:20:26.755933] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.842 [2024-12-15 06:20:26.755938] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.842 [2024-12-15 06:20:26.755941] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.842 [2024-12-15 06:20:26.755944] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27540) on tqpair=0xcbbed0 00:30:06.842 [2024-12-15 06:20:26.755949] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:06.842 [2024-12-15 06:20:26.755958] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.842 [2024-12-15 06:20:26.755961] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.842 [2024-12-15 06:20:26.755965] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcbbed0) 00:30:06.842 [2024-12-15 06:20:26.755970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.842 [2024-12-15 06:20:26.755979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27540, cid 0, qid 0 00:30:06.842 [2024-12-15 
06:20:26.756077] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.842 [2024-12-15 06:20:26.756083] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.842 [2024-12-15 06:20:26.756086] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.842 [2024-12-15 06:20:26.756090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27540) on tqpair=0xcbbed0 00:30:06.842 [2024-12-15 06:20:26.756094] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:06.842 [2024-12-15 06:20:26.756098] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:06.843 [2024-12-15 06:20:26.756105] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:30:06.843 [2024-12-15 06:20:26.756112] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:06.843 [2024-12-15 06:20:26.756120] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.756124] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcbbed0) 00:30:06.843 [2024-12-15 06:20:26.756129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.843 [2024-12-15 06:20:26.756139] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27540, cid 0, qid 0 00:30:06.843 [2024-12-15 06:20:26.756231] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.843 [2024-12-15 06:20:26.756237] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:30:06.843 [2024-12-15 06:20:26.756241] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.756244] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcbbed0): datao=0, datal=4096, cccid=0 00:30:06.843 [2024-12-15 06:20:26.756249] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd27540) on tqpair(0xcbbed0): expected_datao=0, payload_size=4096 00:30:06.843 [2024-12-15 06:20:26.756253] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.756260] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.756264] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.756279] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.843 [2024-12-15 06:20:26.756284] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.843 [2024-12-15 06:20:26.756287] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.756291] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27540) on tqpair=0xcbbed0 00:30:06.843 [2024-12-15 06:20:26.756298] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:30:06.843 [2024-12-15 06:20:26.756302] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:30:06.843 [2024-12-15 06:20:26.756306] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:30:06.843 [2024-12-15 06:20:26.756311] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:30:06.843 [2024-12-15 06:20:26.756315] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
fuses compare and write: 1 00:30:06.843 [2024-12-15 06:20:26.756319] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:30:06.843 [2024-12-15 06:20:26.756329] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:06.843 [2024-12-15 06:20:26.756338] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.756342] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.756345] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcbbed0) 00:30:06.843 [2024-12-15 06:20:26.756351] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:06.843 [2024-12-15 06:20:26.756362] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27540, cid 0, qid 0 00:30:06.843 [2024-12-15 06:20:26.756430] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.843 [2024-12-15 06:20:26.756436] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.843 [2024-12-15 06:20:26.756439] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.756442] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27540) on tqpair=0xcbbed0 00:30:06.843 [2024-12-15 06:20:26.756448] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.756452] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.756457] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcbbed0) 00:30:06.843 [2024-12-15 06:20:26.756462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.843 [2024-12-15 06:20:26.756467] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.756470] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.756473] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xcbbed0) 00:30:06.843 [2024-12-15 06:20:26.756478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.843 [2024-12-15 06:20:26.756483] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.756486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.756490] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xcbbed0) 00:30:06.843 [2024-12-15 06:20:26.756494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.843 [2024-12-15 06:20:26.756500] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.756503] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.756506] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcbbed0) 00:30:06.843 [2024-12-15 06:20:26.756510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.843 [2024-12-15 06:20:26.756515] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:06.843 [2024-12-15 06:20:26.756525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep 
alive timeout (timeout 30000 ms) 00:30:06.843 [2024-12-15 06:20:26.756531] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.756534] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcbbed0) 00:30:06.843 [2024-12-15 06:20:26.756539] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.843 [2024-12-15 06:20:26.756550] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27540, cid 0, qid 0 00:30:06.843 [2024-12-15 06:20:26.756555] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd276c0, cid 1, qid 0 00:30:06.843 [2024-12-15 06:20:26.756559] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27840, cid 2, qid 0 00:30:06.843 [2024-12-15 06:20:26.756563] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd279c0, cid 3, qid 0 00:30:06.843 [2024-12-15 06:20:26.756567] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27b40, cid 4, qid 0 00:30:06.843 [2024-12-15 06:20:26.756682] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.843 [2024-12-15 06:20:26.756687] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.843 [2024-12-15 06:20:26.756690] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.756694] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27b40) on tqpair=0xcbbed0 00:30:06.843 [2024-12-15 06:20:26.756698] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:30:06.843 [2024-12-15 06:20:26.756703] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:30:06.843 [2024-12-15 06:20:26.756711] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.756715] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcbbed0) 00:30:06.843 [2024-12-15 06:20:26.756722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.843 [2024-12-15 06:20:26.756731] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27b40, cid 4, qid 0 00:30:06.843 [2024-12-15 06:20:26.756796] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.843 [2024-12-15 06:20:26.756802] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.843 [2024-12-15 06:20:26.756805] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.756808] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcbbed0): datao=0, datal=4096, cccid=4 00:30:06.843 [2024-12-15 06:20:26.756812] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd27b40) on tqpair(0xcbbed0): expected_datao=0, payload_size=4096 00:30:06.843 [2024-12-15 06:20:26.756816] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.756833] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.756838] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.756881] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.843 [2024-12-15 06:20:26.756887] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.843 [2024-12-15 06:20:26.756890] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.756894] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27b40) on tqpair=0xcbbed0 00:30:06.843 [2024-12-15 06:20:26.756903] 
nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:30:06.843 [2024-12-15 06:20:26.756925] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.756929] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcbbed0) 00:30:06.843 [2024-12-15 06:20:26.756935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.843 [2024-12-15 06:20:26.756941] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.756944] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.756948] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcbbed0) 00:30:06.843 [2024-12-15 06:20:26.756953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.843 [2024-12-15 06:20:26.756966] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27b40, cid 4, qid 0 00:30:06.843 [2024-12-15 06:20:26.756970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27cc0, cid 5, qid 0 00:30:06.843 [2024-12-15 06:20:26.757078] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.843 [2024-12-15 06:20:26.757084] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.843 [2024-12-15 06:20:26.757087] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.757090] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcbbed0): datao=0, datal=1024, cccid=4 00:30:06.843 [2024-12-15 06:20:26.757094] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd27b40) on tqpair(0xcbbed0): expected_datao=0, 
payload_size=1024 00:30:06.843 [2024-12-15 06:20:26.757098] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.757104] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.757107] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.843 [2024-12-15 06:20:26.757112] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.843 [2024-12-15 06:20:26.757117] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.843 [2024-12-15 06:20:26.757120] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.844 [2024-12-15 06:20:26.757123] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27cc0) on tqpair=0xcbbed0 00:30:06.844 [2024-12-15 06:20:26.799173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.844 [2024-12-15 06:20:26.799184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.844 [2024-12-15 06:20:26.799187] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.844 [2024-12-15 06:20:26.799191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27b40) on tqpair=0xcbbed0 00:30:06.844 [2024-12-15 06:20:26.799201] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.844 [2024-12-15 06:20:26.799205] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcbbed0) 00:30:06.844 [2024-12-15 06:20:26.799212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.844 [2024-12-15 06:20:26.799228] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27b40, cid 4, qid 0 00:30:06.844 [2024-12-15 06:20:26.799310] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.844 [2024-12-15 06:20:26.799316] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.844 [2024-12-15 06:20:26.799319] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.844 [2024-12-15 06:20:26.799323] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcbbed0): datao=0, datal=3072, cccid=4 00:30:06.844 [2024-12-15 06:20:26.799327] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd27b40) on tqpair(0xcbbed0): expected_datao=0, payload_size=3072 00:30:06.844 [2024-12-15 06:20:26.799331] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.844 [2024-12-15 06:20:26.799337] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.844 [2024-12-15 06:20:26.799340] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.844 [2024-12-15 06:20:26.799389] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.844 [2024-12-15 06:20:26.799394] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.844 [2024-12-15 06:20:26.799398] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.844 [2024-12-15 06:20:26.799401] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27b40) on tqpair=0xcbbed0 00:30:06.844 [2024-12-15 06:20:26.799407] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.844 [2024-12-15 06:20:26.799411] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcbbed0) 00:30:06.844 [2024-12-15 06:20:26.799416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.844 [2024-12-15 06:20:26.799430] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27b40, cid 4, qid 0 00:30:06.844 [2024-12-15 06:20:26.799522] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.844 [2024-12-15 
06:20:26.799528] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.844 [2024-12-15 06:20:26.799531] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.844 [2024-12-15 06:20:26.799534] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcbbed0): datao=0, datal=8, cccid=4 00:30:06.844 [2024-12-15 06:20:26.799538] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd27b40) on tqpair(0xcbbed0): expected_datao=0, payload_size=8 00:30:06.844 [2024-12-15 06:20:26.799542] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.844 [2024-12-15 06:20:26.799547] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.844 [2024-12-15 06:20:26.799551] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.844 [2024-12-15 06:20:26.840175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.844 [2024-12-15 06:20:26.840187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.844 [2024-12-15 06:20:26.840191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.844 [2024-12-15 06:20:26.840195] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27b40) on tqpair=0xcbbed0 00:30:06.844 ===================================================== 00:30:06.844 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:06.844 ===================================================== 00:30:06.844 Controller Capabilities/Features 00:30:06.844 ================================ 00:30:06.844 Vendor ID: 0000 00:30:06.844 Subsystem Vendor ID: 0000 00:30:06.844 Serial Number: .................... 00:30:06.844 Model Number: ........................................ 
00:30:06.844 Firmware Version: 25.01 00:30:06.844 Recommended Arb Burst: 0 00:30:06.844 IEEE OUI Identifier: 00 00 00 00:30:06.844 Multi-path I/O 00:30:06.844 May have multiple subsystem ports: No 00:30:06.844 May have multiple controllers: No 00:30:06.844 Associated with SR-IOV VF: No 00:30:06.844 Max Data Transfer Size: 131072 00:30:06.844 Max Number of Namespaces: 0 00:30:06.844 Max Number of I/O Queues: 1024 00:30:06.844 NVMe Specification Version (VS): 1.3 00:30:06.844 NVMe Specification Version (Identify): 1.3 00:30:06.844 Maximum Queue Entries: 128 00:30:06.844 Contiguous Queues Required: Yes 00:30:06.844 Arbitration Mechanisms Supported 00:30:06.844 Weighted Round Robin: Not Supported 00:30:06.844 Vendor Specific: Not Supported 00:30:06.844 Reset Timeout: 15000 ms 00:30:06.844 Doorbell Stride: 4 bytes 00:30:06.844 NVM Subsystem Reset: Not Supported 00:30:06.844 Command Sets Supported 00:30:06.844 NVM Command Set: Supported 00:30:06.844 Boot Partition: Not Supported 00:30:06.844 Memory Page Size Minimum: 4096 bytes 00:30:06.844 Memory Page Size Maximum: 4096 bytes 00:30:06.844 Persistent Memory Region: Not Supported 00:30:06.844 Optional Asynchronous Events Supported 00:30:06.844 Namespace Attribute Notices: Not Supported 00:30:06.844 Firmware Activation Notices: Not Supported 00:30:06.844 ANA Change Notices: Not Supported 00:30:06.844 PLE Aggregate Log Change Notices: Not Supported 00:30:06.844 LBA Status Info Alert Notices: Not Supported 00:30:06.844 EGE Aggregate Log Change Notices: Not Supported 00:30:06.844 Normal NVM Subsystem Shutdown event: Not Supported 00:30:06.844 Zone Descriptor Change Notices: Not Supported 00:30:06.844 Discovery Log Change Notices: Supported 00:30:06.844 Controller Attributes 00:30:06.844 128-bit Host Identifier: Not Supported 00:30:06.844 Non-Operational Permissive Mode: Not Supported 00:30:06.844 NVM Sets: Not Supported 00:30:06.844 Read Recovery Levels: Not Supported 00:30:06.844 Endurance Groups: Not Supported 00:30:06.844 
Predictable Latency Mode: Not Supported 00:30:06.844 Traffic Based Keep ALive: Not Supported 00:30:06.844 Namespace Granularity: Not Supported 00:30:06.844 SQ Associations: Not Supported 00:30:06.844 UUID List: Not Supported 00:30:06.844 Multi-Domain Subsystem: Not Supported 00:30:06.844 Fixed Capacity Management: Not Supported 00:30:06.844 Variable Capacity Management: Not Supported 00:30:06.844 Delete Endurance Group: Not Supported 00:30:06.844 Delete NVM Set: Not Supported 00:30:06.844 Extended LBA Formats Supported: Not Supported 00:30:06.844 Flexible Data Placement Supported: Not Supported 00:30:06.844 00:30:06.844 Controller Memory Buffer Support 00:30:06.844 ================================ 00:30:06.844 Supported: No 00:30:06.844 00:30:06.844 Persistent Memory Region Support 00:30:06.844 ================================ 00:30:06.844 Supported: No 00:30:06.844 00:30:06.844 Admin Command Set Attributes 00:30:06.844 ============================ 00:30:06.844 Security Send/Receive: Not Supported 00:30:06.844 Format NVM: Not Supported 00:30:06.844 Firmware Activate/Download: Not Supported 00:30:06.844 Namespace Management: Not Supported 00:30:06.844 Device Self-Test: Not Supported 00:30:06.844 Directives: Not Supported 00:30:06.844 NVMe-MI: Not Supported 00:30:06.844 Virtualization Management: Not Supported 00:30:06.844 Doorbell Buffer Config: Not Supported 00:30:06.844 Get LBA Status Capability: Not Supported 00:30:06.844 Command & Feature Lockdown Capability: Not Supported 00:30:06.844 Abort Command Limit: 1 00:30:06.844 Async Event Request Limit: 4 00:30:06.844 Number of Firmware Slots: N/A 00:30:06.844 Firmware Slot 1 Read-Only: N/A 00:30:06.844 Firmware Activation Without Reset: N/A 00:30:06.844 Multiple Update Detection Support: N/A 00:30:06.844 Firmware Update Granularity: No Information Provided 00:30:06.844 Per-Namespace SMART Log: No 00:30:06.844 Asymmetric Namespace Access Log Page: Not Supported 00:30:06.844 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:30:06.844 Command Effects Log Page: Not Supported 00:30:06.844 Get Log Page Extended Data: Supported 00:30:06.845 Telemetry Log Pages: Not Supported 00:30:06.845 Persistent Event Log Pages: Not Supported 00:30:06.845 Supported Log Pages Log Page: May Support 00:30:06.845 Commands Supported & Effects Log Page: Not Supported 00:30:06.845 Feature Identifiers & Effects Log Page:May Support 00:30:06.845 NVMe-MI Commands & Effects Log Page: May Support 00:30:06.845 Data Area 4 for Telemetry Log: Not Supported 00:30:06.845 Error Log Page Entries Supported: 128 00:30:06.845 Keep Alive: Not Supported 00:30:06.845 00:30:06.845 NVM Command Set Attributes 00:30:06.845 ========================== 00:30:06.845 Submission Queue Entry Size 00:30:06.845 Max: 1 00:30:06.845 Min: 1 00:30:06.845 Completion Queue Entry Size 00:30:06.845 Max: 1 00:30:06.845 Min: 1 00:30:06.845 Number of Namespaces: 0 00:30:06.845 Compare Command: Not Supported 00:30:06.845 Write Uncorrectable Command: Not Supported 00:30:06.845 Dataset Management Command: Not Supported 00:30:06.845 Write Zeroes Command: Not Supported 00:30:06.845 Set Features Save Field: Not Supported 00:30:06.845 Reservations: Not Supported 00:30:06.845 Timestamp: Not Supported 00:30:06.845 Copy: Not Supported 00:30:06.845 Volatile Write Cache: Not Present 00:30:06.845 Atomic Write Unit (Normal): 1 00:30:06.845 Atomic Write Unit (PFail): 1 00:30:06.845 Atomic Compare & Write Unit: 1 00:30:06.845 Fused Compare & Write: Supported 00:30:06.845 Scatter-Gather List 00:30:06.845 SGL Command Set: Supported 00:30:06.845 SGL Keyed: Supported 00:30:06.845 SGL Bit Bucket Descriptor: Not Supported 00:30:06.845 SGL Metadata Pointer: Not Supported 00:30:06.845 Oversized SGL: Not Supported 00:30:06.845 SGL Metadata Address: Not Supported 00:30:06.845 SGL Offset: Supported 00:30:06.845 Transport SGL Data Block: Not Supported 00:30:06.845 Replay Protected Memory Block: Not Supported 00:30:06.845 00:30:06.845 
Firmware Slot Information 00:30:06.845 ========================= 00:30:06.845 Active slot: 0 00:30:06.845 00:30:06.845 00:30:06.845 Error Log 00:30:06.845 ========= 00:30:06.845 00:30:06.845 Active Namespaces 00:30:06.845 ================= 00:30:06.845 Discovery Log Page 00:30:06.845 ================== 00:30:06.845 Generation Counter: 2 00:30:06.845 Number of Records: 2 00:30:06.845 Record Format: 0 00:30:06.845 00:30:06.845 Discovery Log Entry 0 00:30:06.845 ---------------------- 00:30:06.845 Transport Type: 3 (TCP) 00:30:06.845 Address Family: 1 (IPv4) 00:30:06.845 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:06.845 Entry Flags: 00:30:06.845 Duplicate Returned Information: 1 00:30:06.845 Explicit Persistent Connection Support for Discovery: 1 00:30:06.845 Transport Requirements: 00:30:06.845 Secure Channel: Not Required 00:30:06.845 Port ID: 0 (0x0000) 00:30:06.845 Controller ID: 65535 (0xffff) 00:30:06.845 Admin Max SQ Size: 128 00:30:06.845 Transport Service Identifier: 4420 00:30:06.845 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:06.845 Transport Address: 10.0.0.2 00:30:06.845 Discovery Log Entry 1 00:30:06.845 ---------------------- 00:30:06.845 Transport Type: 3 (TCP) 00:30:06.845 Address Family: 1 (IPv4) 00:30:06.845 Subsystem Type: 2 (NVM Subsystem) 00:30:06.845 Entry Flags: 00:30:06.845 Duplicate Returned Information: 0 00:30:06.845 Explicit Persistent Connection Support for Discovery: 0 00:30:06.845 Transport Requirements: 00:30:06.845 Secure Channel: Not Required 00:30:06.845 Port ID: 0 (0x0000) 00:30:06.845 Controller ID: 65535 (0xffff) 00:30:06.845 Admin Max SQ Size: 128 00:30:06.845 Transport Service Identifier: 4420 00:30:06.845 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:30:06.845 Transport Address: 10.0.0.2 [2024-12-15 06:20:26.840279] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:30:06.845 [2024-12-15 
06:20:26.840292] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27540) on tqpair=0xcbbed0 00:30:06.845 [2024-12-15 06:20:26.840298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.845 [2024-12-15 06:20:26.840303] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd276c0) on tqpair=0xcbbed0 00:30:06.845 [2024-12-15 06:20:26.840307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.845 [2024-12-15 06:20:26.840312] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27840) on tqpair=0xcbbed0 00:30:06.845 [2024-12-15 06:20:26.840316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.845 [2024-12-15 06:20:26.840320] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd279c0) on tqpair=0xcbbed0 00:30:06.845 [2024-12-15 06:20:26.840324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.845 [2024-12-15 06:20:26.840332] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.845 [2024-12-15 06:20:26.840335] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.845 [2024-12-15 06:20:26.840338] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcbbed0) 00:30:06.845 [2024-12-15 06:20:26.840346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.845 [2024-12-15 06:20:26.840360] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd279c0, cid 3, qid 0 00:30:06.845 [2024-12-15 06:20:26.840420] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.845 [2024-12-15 
06:20:26.840426] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.845 [2024-12-15 06:20:26.840430] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.845 [2024-12-15 06:20:26.840433] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd279c0) on tqpair=0xcbbed0 00:30:06.845 [2024-12-15 06:20:26.840439] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.845 [2024-12-15 06:20:26.840443] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.845 [2024-12-15 06:20:26.840446] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcbbed0) 00:30:06.845 [2024-12-15 06:20:26.840452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.845 [2024-12-15 06:20:26.840464] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd279c0, cid 3, qid 0 00:30:06.845 [2024-12-15 06:20:26.840576] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.845 [2024-12-15 06:20:26.840582] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.845 [2024-12-15 06:20:26.840585] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.845 [2024-12-15 06:20:26.840589] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd279c0) on tqpair=0xcbbed0 00:30:06.845 [2024-12-15 06:20:26.840593] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:30:06.845 [2024-12-15 06:20:26.840598] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:30:06.845 [2024-12-15 06:20:26.840606] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.845 [2024-12-15 06:20:26.840609] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.845 
[2024-12-15 06:20:26.840613] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcbbed0) 00:30:06.845 [2024-12-15 06:20:26.840618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.845 [2024-12-15 06:20:26.840628] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd279c0, cid 3, qid 0 00:30:06.845 [2024-12-15 06:20:26.840729] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.845 [2024-12-15 06:20:26.840736] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.845 [2024-12-15 06:20:26.840739] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.845 [2024-12-15 06:20:26.840742] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd279c0) on tqpair=0xcbbed0 00:30:06.845 [2024-12-15 06:20:26.840751] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.845 [2024-12-15 06:20:26.840755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.845 [2024-12-15 06:20:26.840758] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcbbed0) 00:30:06.845 [2024-12-15 06:20:26.840764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.845 [2024-12-15 06:20:26.840773] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd279c0, cid 3, qid 0 00:30:06.845 [2024-12-15 06:20:26.840879] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.845 [2024-12-15 06:20:26.840884] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.845 [2024-12-15 06:20:26.840887] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.845 [2024-12-15 06:20:26.840891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd279c0) on tqpair=0xcbbed0 
00:30:06.845 [2024-12-15 06:20:26.840899] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.845 [2024-12-15 06:20:26.840903] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.845 [2024-12-15 06:20:26.840906] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcbbed0) 00:30:06.845 [2024-12-15 06:20:26.840912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.845 [2024-12-15 06:20:26.840921] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd279c0, cid 3, qid 0 00:30:06.845 [2024-12-15 06:20:26.840980] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.845 [2024-12-15 06:20:26.840986] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.845 [2024-12-15 06:20:26.840989] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.845 [2024-12-15 06:20:26.846999] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd279c0) on tqpair=0xcbbed0 00:30:06.845 [2024-12-15 06:20:26.847011] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.845 [2024-12-15 06:20:26.847015] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.845 [2024-12-15 06:20:26.847018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcbbed0) 00:30:06.845 [2024-12-15 06:20:26.847024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.846 [2024-12-15 06:20:26.847035] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd279c0, cid 3, qid 0 00:30:06.846 [2024-12-15 06:20:26.847218] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.846 [2024-12-15 06:20:26.847224] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.846 
[2024-12-15 06:20:26.847227] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.846 [2024-12-15 06:20:26.847230] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd279c0) on tqpair=0xcbbed0 00:30:06.846 [2024-12-15 06:20:26.847238] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:30:06.846 00:30:06.846 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:30:06.846 [2024-12-15 06:20:26.883433] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:30:06.846 [2024-12-15 06:20:26.883473] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1116416 ] 00:30:06.846 [2024-12-15 06:20:26.920811] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:30:06.846 [2024-12-15 06:20:26.920852] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:06.846 [2024-12-15 06:20:26.920857] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:06.846 [2024-12-15 06:20:26.920867] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:06.846 [2024-12-15 06:20:26.920874] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:06.846 [2024-12-15 06:20:26.928131] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:30:06.846 [2024-12-15 06:20:26.928159] 
nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x229aed0 0 00:30:06.846 [2024-12-15 06:20:26.928360] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:06.846 [2024-12-15 06:20:26.928366] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:06.846 [2024-12-15 06:20:26.928370] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:06.846 [2024-12-15 06:20:26.928373] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:06.846 [2024-12-15 06:20:26.928393] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.846 [2024-12-15 06:20:26.928398] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.846 [2024-12-15 06:20:26.928401] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x229aed0) 00:30:06.846 [2024-12-15 06:20:26.928411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:06.846 [2024-12-15 06:20:26.928423] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2306540, cid 0, qid 0 00:30:06.846 [2024-12-15 06:20:26.936001] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.846 [2024-12-15 06:20:26.936015] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.846 [2024-12-15 06:20:26.936019] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.846 [2024-12-15 06:20:26.936022] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2306540) on tqpair=0x229aed0 00:30:06.846 [2024-12-15 06:20:26.936033] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:06.846 [2024-12-15 06:20:26.936039] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:30:06.846 [2024-12-15 06:20:26.936044] 
nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:30:06.846 [2024-12-15 06:20:26.936054] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.846 [2024-12-15 06:20:26.936057] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.846 [2024-12-15 06:20:26.936061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x229aed0) 00:30:06.846 [2024-12-15 06:20:26.936068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.846 [2024-12-15 06:20:26.936080] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2306540, cid 0, qid 0 00:30:06.846 [2024-12-15 06:20:26.936242] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.846 [2024-12-15 06:20:26.936248] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.846 [2024-12-15 06:20:26.936252] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.846 [2024-12-15 06:20:26.936255] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2306540) on tqpair=0x229aed0 00:30:06.846 [2024-12-15 06:20:26.936259] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:30:06.846 [2024-12-15 06:20:26.936269] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:30:06.846 [2024-12-15 06:20:26.936275] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.846 [2024-12-15 06:20:26.936278] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.846 [2024-12-15 06:20:26.936281] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x229aed0) 00:30:06.846 [2024-12-15 06:20:26.936287] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.846 [2024-12-15 06:20:26.936297] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2306540, cid 0, qid 0 00:30:06.846 [2024-12-15 06:20:26.936358] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.846 [2024-12-15 06:20:26.936364] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.846 [2024-12-15 06:20:26.936367] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.846 [2024-12-15 06:20:26.936370] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2306540) on tqpair=0x229aed0 00:30:06.846 [2024-12-15 06:20:26.936375] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:30:06.846 [2024-12-15 06:20:26.936382] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:06.846 [2024-12-15 06:20:26.936388] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.846 [2024-12-15 06:20:26.936391] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.846 [2024-12-15 06:20:26.936394] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x229aed0) 00:30:06.846 [2024-12-15 06:20:26.936400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.846 [2024-12-15 06:20:26.936409] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2306540, cid 0, qid 0 00:30:06.846 [2024-12-15 06:20:26.936492] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.846 [2024-12-15 06:20:26.936497] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.846 [2024-12-15 06:20:26.936500] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.846 [2024-12-15 06:20:26.936504] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2306540) on tqpair=0x229aed0 00:30:06.846 [2024-12-15 06:20:26.936508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:06.846 [2024-12-15 06:20:26.936516] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.846 [2024-12-15 06:20:26.936520] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.846 [2024-12-15 06:20:26.936523] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x229aed0) 00:30:06.846 [2024-12-15 06:20:26.936528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.846 [2024-12-15 06:20:26.936538] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2306540, cid 0, qid 0 00:30:06.846 [2024-12-15 06:20:26.936641] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.846 [2024-12-15 06:20:26.936646] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.846 [2024-12-15 06:20:26.936649] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.846 [2024-12-15 06:20:26.936653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2306540) on tqpair=0x229aed0 00:30:06.846 [2024-12-15 06:20:26.936656] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:06.846 [2024-12-15 06:20:26.936661] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:06.846 [2024-12-15 06:20:26.936667] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:06.846 [2024-12-15 06:20:26.936776] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:30:06.846 [2024-12-15 06:20:26.936781] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:06.846 [2024-12-15 06:20:26.936787] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.846 [2024-12-15 06:20:26.936790] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.846 [2024-12-15 06:20:26.936793] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x229aed0) 00:30:06.846 [2024-12-15 06:20:26.936799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.846 [2024-12-15 06:20:26.936808] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2306540, cid 0, qid 0 00:30:06.846 [2024-12-15 06:20:26.936874] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.846 [2024-12-15 06:20:26.936880] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.846 [2024-12-15 06:20:26.936883] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.846 [2024-12-15 06:20:26.936887] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2306540) on tqpair=0x229aed0 00:30:06.846 [2024-12-15 06:20:26.936891] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:06.846 [2024-12-15 06:20:26.936899] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.846 [2024-12-15 06:20:26.936902] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.846 [2024-12-15 06:20:26.936906] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x229aed0) 00:30:06.846 [2024-12-15 06:20:26.936911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.846 [2024-12-15 06:20:26.936921] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2306540, cid 0, qid 0 00:30:06.846 [2024-12-15 06:20:26.936978] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.846 [2024-12-15 06:20:26.936984] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.846 [2024-12-15 06:20:26.936986] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.846 [2024-12-15 06:20:26.936990] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2306540) on tqpair=0x229aed0 00:30:06.846 [2024-12-15 06:20:26.937000] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:06.846 [2024-12-15 06:20:26.937005] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:06.846 [2024-12-15 06:20:26.937011] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:30:06.847 [2024-12-15 06:20:26.937018] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:06.847 [2024-12-15 06:20:26.937026] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.847 [2024-12-15 06:20:26.937029] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x229aed0) 00:30:06.847 [2024-12-15 06:20:26.937035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.847 [2024-12-15 06:20:26.937045] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2306540, cid 0, qid 0 00:30:06.847 [2024-12-15 06:20:26.937182] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.847 [2024-12-15 06:20:26.937188] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.847 [2024-12-15 06:20:26.937191] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.847 [2024-12-15 06:20:26.937196] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x229aed0): datao=0, datal=4096, cccid=0 00:30:06.847 [2024-12-15 06:20:26.937201] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2306540) on tqpair(0x229aed0): expected_datao=0, payload_size=4096 00:30:06.847 [2024-12-15 06:20:26.937204] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.847 [2024-12-15 06:20:26.937210] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.847 [2024-12-15 06:20:26.937214] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.847 [2024-12-15 06:20:26.937223] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.847 [2024-12-15 06:20:26.937228] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.847 [2024-12-15 06:20:26.937231] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.847 [2024-12-15 06:20:26.937235] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2306540) on tqpair=0x229aed0 00:30:06.847 [2024-12-15 06:20:26.937241] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:30:06.847 [2024-12-15 06:20:26.937245] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:30:06.847 [2024-12-15 06:20:26.937249] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:30:06.847 [2024-12-15 06:20:26.937252] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:30:06.847 [2024-12-15 06:20:26.937256] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:30:06.847 [2024-12-15 06:20:26.937260] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:30:06.847 [2024-12-15 06:20:26.937270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:06.847 [2024-12-15 06:20:26.937279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.847 [2024-12-15 06:20:26.937283] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.847 [2024-12-15 06:20:26.937286] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x229aed0) 00:30:06.847 [2024-12-15 06:20:26.937292] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:06.847 [2024-12-15 06:20:26.937302] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2306540, cid 0, qid 0 00:30:06.847 [2024-12-15 06:20:26.937378] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.847 [2024-12-15 06:20:26.937384] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.847 [2024-12-15 06:20:26.937387] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.847 [2024-12-15 06:20:26.937390] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2306540) on tqpair=0x229aed0 00:30:06.847 [2024-12-15 06:20:26.937395] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.847 [2024-12-15 06:20:26.937399] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.847 [2024-12-15 06:20:26.937402] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x229aed0) 00:30:06.847 [2024-12-15 06:20:26.937407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.847 [2024-12-15 06:20:26.937412] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.847 [2024-12-15 06:20:26.937415] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.847 [2024-12-15 06:20:26.937418] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x229aed0) 00:30:06.847 [2024-12-15 06:20:26.937423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.847 [2024-12-15 06:20:26.937428] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.847 [2024-12-15 06:20:26.937433] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.847 [2024-12-15 06:20:26.937436] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x229aed0) 00:30:06.847 [2024-12-15 06:20:26.937441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.847 [2024-12-15 06:20:26.937446] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.847 [2024-12-15 06:20:26.937449] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.847 [2024-12-15 06:20:26.937453] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x229aed0) 00:30:06.847 [2024-12-15 06:20:26.937457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.847 [2024-12-15 06:20:26.937461] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:06.847 [2024-12-15 06:20:26.937471] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:06.847 [2024-12-15 06:20:26.937477] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.847 [2024-12-15 06:20:26.937480] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x229aed0) 00:30:06.847 [2024-12-15 06:20:26.937486] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.847 [2024-12-15 06:20:26.937496] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2306540, cid 0, qid 0 00:30:06.847 [2024-12-15 06:20:26.937501] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23066c0, cid 1, qid 0 00:30:06.847 [2024-12-15 06:20:26.937505] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2306840, cid 2, qid 0 00:30:06.847 [2024-12-15 06:20:26.937509] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23069c0, cid 3, qid 0 00:30:06.847 [2024-12-15 06:20:26.937513] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2306b40, cid 4, qid 0 00:30:06.847 [2024-12-15 06:20:26.937631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.847 [2024-12-15 06:20:26.937637] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.847 [2024-12-15 06:20:26.937640] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.847 [2024-12-15 06:20:26.937644] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2306b40) on tqpair=0x229aed0 00:30:06.847 [2024-12-15 06:20:26.937648] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:30:06.847 [2024-12-15 06:20:26.937652] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:30:06.847 [2024-12-15 06:20:26.937661] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:30:06.847 [2024-12-15 06:20:26.937668] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:30:06.847 [2024-12-15 06:20:26.937674] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.847 [2024-12-15 06:20:26.937677] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.847 [2024-12-15 06:20:26.937680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x229aed0) 00:30:06.847 [2024-12-15 06:20:26.937686] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:06.847 [2024-12-15 06:20:26.937695] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2306b40, cid 4, qid 0 00:30:06.847 [2024-12-15 06:20:26.937781] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.847 [2024-12-15 06:20:26.937787] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.847 [2024-12-15 06:20:26.937791] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.847 [2024-12-15 06:20:26.937795] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2306b40) on tqpair=0x229aed0 00:30:06.847 [2024-12-15 06:20:26.937844] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:30:06.847 [2024-12-15 06:20:26.937853] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:30:06.847 [2024-12-15 06:20:26.937860] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.847 [2024-12-15 06:20:26.937863] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x229aed0) 00:30:06.847 [2024-12-15 06:20:26.937868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.847 [2024-12-15 06:20:26.937878] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2306b40, cid 4, qid 0 00:30:06.847 [2024-12-15 06:20:26.937951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.847 [2024-12-15 06:20:26.937957] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.847 [2024-12-15 06:20:26.937960] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.847 [2024-12-15 06:20:26.937963] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x229aed0): datao=0, datal=4096, cccid=4 00:30:06.847 [2024-12-15 06:20:26.937967] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2306b40) on tqpair(0x229aed0): expected_datao=0, payload_size=4096 00:30:06.847 [2024-12-15 06:20:26.937971] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.847 [2024-12-15 06:20:26.938008] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.847 [2024-12-15 06:20:26.938012] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.847 [2024-12-15 06:20:26.938082] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.847 [2024-12-15 06:20:26.938087] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.847 [2024-12-15 06:20:26.938090] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.847 [2024-12-15 06:20:26.938094] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2306b40) on tqpair=0x229aed0 00:30:06.847 [2024-12-15 06:20:26.938103] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:30:06.847 [2024-12-15 06:20:26.938113] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:30:06.847 [2024-12-15 06:20:26.938121] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:30:06.847 [2024-12-15 06:20:26.938127] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.847 [2024-12-15 06:20:26.938130] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x229aed0) 00:30:06.847 [2024-12-15 06:20:26.938136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.847 [2024-12-15 06:20:26.938146] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2306b40, cid 4, qid 0 00:30:06.848 [2024-12-15 06:20:26.938239] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.848 [2024-12-15 06:20:26.938245] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.848 [2024-12-15 06:20:26.938248] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.848 [2024-12-15 06:20:26.938251] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x229aed0): datao=0, datal=4096, cccid=4 00:30:06.848 [2024-12-15 06:20:26.938255] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2306b40) on tqpair(0x229aed0): expected_datao=0, payload_size=4096 00:30:06.848 [2024-12-15 06:20:26.938258] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.848 [2024-12-15 06:20:26.938264] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.848 [2024-12-15 06:20:26.938269] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.848 [2024-12-15 06:20:26.938285] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.848 [2024-12-15 06:20:26.938290] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.848 [2024-12-15 06:20:26.938293] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.848 [2024-12-15 06:20:26.938297] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2306b40) on tqpair=0x229aed0 00:30:06.848 [2024-12-15 06:20:26.938307] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:30:06.848 [2024-12-15 06:20:26.938315] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:06.848 [2024-12-15 06:20:26.938321] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.848 [2024-12-15 06:20:26.938324] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x229aed0) 00:30:06.848 [2024-12-15 06:20:26.938330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.848 [2024-12-15 06:20:26.938340] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2306b40, cid 4, qid 0 00:30:06.848 [2024-12-15 06:20:26.938405] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.848 [2024-12-15 06:20:26.938411] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.848 [2024-12-15 06:20:26.938414] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.848 [2024-12-15 06:20:26.938418] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x229aed0): datao=0, datal=4096, cccid=4 00:30:06.848 [2024-12-15 06:20:26.938421] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2306b40) on tqpair(0x229aed0): expected_datao=0, payload_size=4096 00:30:06.848 [2024-12-15 06:20:26.938425] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.848 [2024-12-15 06:20:26.938445] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.848 [2024-12-15 06:20:26.938449] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.848 [2024-12-15 06:20:26.938536] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.848 [2024-12-15 06:20:26.938541] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.848 [2024-12-15 06:20:26.938544] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.848 [2024-12-15 06:20:26.938547] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2306b40) on tqpair=0x229aed0 00:30:06.848 [2024-12-15 06:20:26.938553] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:06.848 [2024-12-15 06:20:26.938560] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:30:06.848 [2024-12-15 06:20:26.938567] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:30:06.848 [2024-12-15 06:20:26.938573] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:30:06.848 [2024-12-15 06:20:26.938577] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:06.848 [2024-12-15 06:20:26.938582] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:30:06.848 [2024-12-15 06:20:26.938587] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:30:06.848 [2024-12-15 06:20:26.938591] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:30:06.848 [2024-12-15 06:20:26.938596] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:30:06.848 [2024-12-15 06:20:26.938609] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.848 [2024-12-15 06:20:26.938613] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x229aed0) 00:30:06.848 [2024-12-15 06:20:26.938618] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.848 [2024-12-15 06:20:26.938624] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.848 [2024-12-15 06:20:26.938627] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.848 [2024-12-15 06:20:26.938630] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x229aed0) 00:30:06.848 [2024-12-15 06:20:26.938635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.848 [2024-12-15 06:20:26.938648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2306b40, cid 4, qid 0 00:30:06.848 [2024-12-15 06:20:26.938652] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2306cc0, cid 5, 
qid 0 00:30:06.848 [2024-12-15 06:20:26.938726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.848 [2024-12-15 06:20:26.938731] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.848 [2024-12-15 06:20:26.938734] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.848 [2024-12-15 06:20:26.938738] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2306b40) on tqpair=0x229aed0 00:30:06.848 [2024-12-15 06:20:26.938743] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.848 [2024-12-15 06:20:26.938748] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.848 [2024-12-15 06:20:26.938751] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.848 [2024-12-15 06:20:26.938754] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2306cc0) on tqpair=0x229aed0 00:30:06.848 [2024-12-15 06:20:26.938762] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.848 [2024-12-15 06:20:26.938766] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x229aed0) 00:30:06.848 [2024-12-15 06:20:26.938771] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.848 [2024-12-15 06:20:26.938780] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2306cc0, cid 5, qid 0 00:30:06.848 [2024-12-15 06:20:26.938869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.848 [2024-12-15 06:20:26.938875] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.848 [2024-12-15 06:20:26.938878] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.848 [2024-12-15 06:20:26.938881] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2306cc0) on tqpair=0x229aed0 00:30:06.848 [2024-12-15 06:20:26.938888] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.848 [2024-12-15 06:20:26.938892] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x229aed0) 00:30:06.848 [2024-12-15 06:20:26.938897] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.848 [2024-12-15 06:20:26.938907] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2306cc0, cid 5, qid 0 00:30:06.848 [2024-12-15 06:20:26.939021] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.848 [2024-12-15 06:20:26.939028] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.848 [2024-12-15 06:20:26.939031] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.848 [2024-12-15 06:20:26.939034] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2306cc0) on tqpair=0x229aed0 00:30:06.848 [2024-12-15 06:20:26.939041] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.848 [2024-12-15 06:20:26.939045] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x229aed0) 00:30:06.848 [2024-12-15 06:20:26.939052] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.848 [2024-12-15 06:20:26.939062] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2306cc0, cid 5, qid 0 00:30:06.848 [2024-12-15 06:20:26.939121] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.848 [2024-12-15 06:20:26.939126] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.848 [2024-12-15 06:20:26.939130] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.848 [2024-12-15 06:20:26.939133] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x2306cc0) on tqpair=0x229aed0 00:30:06.848 [2024-12-15 06:20:26.939144] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.848 [2024-12-15 06:20:26.939148] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x229aed0) 00:30:06.848 [2024-12-15 06:20:26.939153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.848 [2024-12-15 06:20:26.939159] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.848 [2024-12-15 06:20:26.939163] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x229aed0) 00:30:06.848 [2024-12-15 06:20:26.939168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.848 [2024-12-15 06:20:26.939174] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.848 [2024-12-15 06:20:26.939177] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x229aed0) 00:30:06.848 [2024-12-15 06:20:26.939182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.848 [2024-12-15 06:20:26.939188] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.848 [2024-12-15 06:20:26.939192] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x229aed0) 00:30:06.848 [2024-12-15 06:20:26.939197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.848 [2024-12-15 06:20:26.939207] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2306cc0, 
cid 5, qid 0 00:30:06.848 [2024-12-15 06:20:26.939212] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2306b40, cid 4, qid 0 00:30:06.848 [2024-12-15 06:20:26.939216] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2306e40, cid 6, qid 0 00:30:06.848 [2024-12-15 06:20:26.939220] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2306fc0, cid 7, qid 0 00:30:06.848 [2024-12-15 06:20:26.939357] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.848 [2024-12-15 06:20:26.939363] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.848 [2024-12-15 06:20:26.939366] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.848 [2024-12-15 06:20:26.939370] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x229aed0): datao=0, datal=8192, cccid=5 00:30:06.848 [2024-12-15 06:20:26.939373] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2306cc0) on tqpair(0x229aed0): expected_datao=0, payload_size=8192 00:30:06.848 [2024-12-15 06:20:26.939377] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.849 [2024-12-15 06:20:26.939399] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.849 [2024-12-15 06:20:26.939403] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.849 [2024-12-15 06:20:26.939408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.849 [2024-12-15 06:20:26.939413] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.849 [2024-12-15 06:20:26.939416] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.849 [2024-12-15 06:20:26.939419] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x229aed0): datao=0, datal=512, cccid=4 00:30:06.849 [2024-12-15 06:20:26.939426] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2306b40) on tqpair(0x229aed0): 
expected_datao=0, payload_size=512 00:30:06.849 [2024-12-15 06:20:26.939430] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.849 [2024-12-15 06:20:26.939435] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.849 [2024-12-15 06:20:26.939438] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.849 [2024-12-15 06:20:26.939443] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.849 [2024-12-15 06:20:26.939448] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.849 [2024-12-15 06:20:26.939451] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.849 [2024-12-15 06:20:26.939454] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x229aed0): datao=0, datal=512, cccid=6 00:30:06.849 [2024-12-15 06:20:26.939458] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2306e40) on tqpair(0x229aed0): expected_datao=0, payload_size=512 00:30:06.849 [2024-12-15 06:20:26.939461] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.849 [2024-12-15 06:20:26.939467] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.849 [2024-12-15 06:20:26.939470] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.849 [2024-12-15 06:20:26.939475] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:06.849 [2024-12-15 06:20:26.939479] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:06.849 [2024-12-15 06:20:26.939482] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:06.849 [2024-12-15 06:20:26.939486] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x229aed0): datao=0, datal=4096, cccid=7 00:30:06.849 [2024-12-15 06:20:26.939489] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2306fc0) on tqpair(0x229aed0): expected_datao=0, payload_size=4096 00:30:06.849 [2024-12-15 
06:20:26.939493] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.849 [2024-12-15 06:20:26.939499] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:06.849 [2024-12-15 06:20:26.939502] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:06.849 [2024-12-15 06:20:26.939509] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.849 [2024-12-15 06:20:26.939514] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.849 [2024-12-15 06:20:26.939517] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.849 [2024-12-15 06:20:26.939520] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2306cc0) on tqpair=0x229aed0 00:30:06.849 [2024-12-15 06:20:26.939530] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.849 [2024-12-15 06:20:26.939535] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.849 [2024-12-15 06:20:26.939538] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.849 [2024-12-15 06:20:26.939541] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2306b40) on tqpair=0x229aed0 00:30:06.849 [2024-12-15 06:20:26.939549] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.849 [2024-12-15 06:20:26.939554] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.849 [2024-12-15 06:20:26.939557] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.849 [2024-12-15 06:20:26.939560] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2306e40) on tqpair=0x229aed0 00:30:06.849 [2024-12-15 06:20:26.939566] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.849 [2024-12-15 06:20:26.939571] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.849 [2024-12-15 06:20:26.939574] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:30:06.849 [2024-12-15 06:20:26.939577] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2306fc0) on tqpair=0x229aed0 00:30:06.849 ===================================================== 00:30:06.849 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:06.849 ===================================================== 00:30:06.849 Controller Capabilities/Features 00:30:06.849 ================================ 00:30:06.849 Vendor ID: 8086 00:30:06.849 Subsystem Vendor ID: 8086 00:30:06.849 Serial Number: SPDK00000000000001 00:30:06.849 Model Number: SPDK bdev Controller 00:30:06.849 Firmware Version: 25.01 00:30:06.849 Recommended Arb Burst: 6 00:30:06.849 IEEE OUI Identifier: e4 d2 5c 00:30:06.849 Multi-path I/O 00:30:06.849 May have multiple subsystem ports: Yes 00:30:06.849 May have multiple controllers: Yes 00:30:06.849 Associated with SR-IOV VF: No 00:30:06.849 Max Data Transfer Size: 131072 00:30:06.849 Max Number of Namespaces: 32 00:30:06.849 Max Number of I/O Queues: 127 00:30:06.849 NVMe Specification Version (VS): 1.3 00:30:06.849 NVMe Specification Version (Identify): 1.3 00:30:06.849 Maximum Queue Entries: 128 00:30:06.849 Contiguous Queues Required: Yes 00:30:06.849 Arbitration Mechanisms Supported 00:30:06.849 Weighted Round Robin: Not Supported 00:30:06.849 Vendor Specific: Not Supported 00:30:06.849 Reset Timeout: 15000 ms 00:30:06.849 Doorbell Stride: 4 bytes 00:30:06.849 NVM Subsystem Reset: Not Supported 00:30:06.849 Command Sets Supported 00:30:06.849 NVM Command Set: Supported 00:30:06.849 Boot Partition: Not Supported 00:30:06.849 Memory Page Size Minimum: 4096 bytes 00:30:06.849 Memory Page Size Maximum: 4096 bytes 00:30:06.849 Persistent Memory Region: Not Supported 00:30:06.849 Optional Asynchronous Events Supported 00:30:06.849 Namespace Attribute Notices: Supported 00:30:06.849 Firmware Activation Notices: Not Supported 00:30:06.849 ANA Change Notices: Not Supported 00:30:06.849 PLE Aggregate Log 
Change Notices: Not Supported 00:30:06.849 LBA Status Info Alert Notices: Not Supported 00:30:06.849 EGE Aggregate Log Change Notices: Not Supported 00:30:06.849 Normal NVM Subsystem Shutdown event: Not Supported 00:30:06.849 Zone Descriptor Change Notices: Not Supported 00:30:06.849 Discovery Log Change Notices: Not Supported 00:30:06.849 Controller Attributes 00:30:06.849 128-bit Host Identifier: Supported 00:30:06.849 Non-Operational Permissive Mode: Not Supported 00:30:06.849 NVM Sets: Not Supported 00:30:06.849 Read Recovery Levels: Not Supported 00:30:06.849 Endurance Groups: Not Supported 00:30:06.849 Predictable Latency Mode: Not Supported 00:30:06.849 Traffic Based Keep ALive: Not Supported 00:30:06.849 Namespace Granularity: Not Supported 00:30:06.849 SQ Associations: Not Supported 00:30:06.849 UUID List: Not Supported 00:30:06.849 Multi-Domain Subsystem: Not Supported 00:30:06.849 Fixed Capacity Management: Not Supported 00:30:06.849 Variable Capacity Management: Not Supported 00:30:06.849 Delete Endurance Group: Not Supported 00:30:06.849 Delete NVM Set: Not Supported 00:30:06.849 Extended LBA Formats Supported: Not Supported 00:30:06.849 Flexible Data Placement Supported: Not Supported 00:30:06.849 00:30:06.849 Controller Memory Buffer Support 00:30:06.849 ================================ 00:30:06.849 Supported: No 00:30:06.849 00:30:06.849 Persistent Memory Region Support 00:30:06.849 ================================ 00:30:06.849 Supported: No 00:30:06.849 00:30:06.849 Admin Command Set Attributes 00:30:06.849 ============================ 00:30:06.849 Security Send/Receive: Not Supported 00:30:06.849 Format NVM: Not Supported 00:30:06.849 Firmware Activate/Download: Not Supported 00:30:06.849 Namespace Management: Not Supported 00:30:06.849 Device Self-Test: Not Supported 00:30:06.849 Directives: Not Supported 00:30:06.849 NVMe-MI: Not Supported 00:30:06.849 Virtualization Management: Not Supported 00:30:06.849 Doorbell Buffer Config: Not Supported 
00:30:06.849 Get LBA Status Capability: Not Supported 00:30:06.849 Command & Feature Lockdown Capability: Not Supported 00:30:06.849 Abort Command Limit: 4 00:30:06.849 Async Event Request Limit: 4 00:30:06.849 Number of Firmware Slots: N/A 00:30:06.849 Firmware Slot 1 Read-Only: N/A 00:30:06.849 Firmware Activation Without Reset: N/A 00:30:06.849 Multiple Update Detection Support: N/A 00:30:06.849 Firmware Update Granularity: No Information Provided 00:30:06.849 Per-Namespace SMART Log: No 00:30:06.849 Asymmetric Namespace Access Log Page: Not Supported 00:30:06.849 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:30:06.849 Command Effects Log Page: Supported 00:30:06.849 Get Log Page Extended Data: Supported 00:30:06.849 Telemetry Log Pages: Not Supported 00:30:06.849 Persistent Event Log Pages: Not Supported 00:30:06.850 Supported Log Pages Log Page: May Support 00:30:06.850 Commands Supported & Effects Log Page: Not Supported 00:30:06.850 Feature Identifiers & Effects Log Page:May Support 00:30:06.850 NVMe-MI Commands & Effects Log Page: May Support 00:30:06.850 Data Area 4 for Telemetry Log: Not Supported 00:30:06.850 Error Log Page Entries Supported: 128 00:30:06.850 Keep Alive: Supported 00:30:06.850 Keep Alive Granularity: 10000 ms 00:30:06.850 00:30:06.850 NVM Command Set Attributes 00:30:06.850 ========================== 00:30:06.850 Submission Queue Entry Size 00:30:06.850 Max: 64 00:30:06.850 Min: 64 00:30:06.850 Completion Queue Entry Size 00:30:06.850 Max: 16 00:30:06.850 Min: 16 00:30:06.850 Number of Namespaces: 32 00:30:06.850 Compare Command: Supported 00:30:06.850 Write Uncorrectable Command: Not Supported 00:30:06.850 Dataset Management Command: Supported 00:30:06.850 Write Zeroes Command: Supported 00:30:06.850 Set Features Save Field: Not Supported 00:30:06.850 Reservations: Supported 00:30:06.850 Timestamp: Not Supported 00:30:06.850 Copy: Supported 00:30:06.850 Volatile Write Cache: Present 00:30:06.850 Atomic Write Unit (Normal): 1 00:30:06.850 
Atomic Write Unit (PFail): 1 00:30:06.850 Atomic Compare & Write Unit: 1 00:30:06.850 Fused Compare & Write: Supported 00:30:06.850 Scatter-Gather List 00:30:06.850 SGL Command Set: Supported 00:30:06.850 SGL Keyed: Supported 00:30:06.850 SGL Bit Bucket Descriptor: Not Supported 00:30:06.850 SGL Metadata Pointer: Not Supported 00:30:06.850 Oversized SGL: Not Supported 00:30:06.850 SGL Metadata Address: Not Supported 00:30:06.850 SGL Offset: Supported 00:30:06.850 Transport SGL Data Block: Not Supported 00:30:06.850 Replay Protected Memory Block: Not Supported 00:30:06.850 00:30:06.850 Firmware Slot Information 00:30:06.850 ========================= 00:30:06.850 Active slot: 1 00:30:06.850 Slot 1 Firmware Revision: 25.01 00:30:06.850 00:30:06.850 00:30:06.850 Commands Supported and Effects 00:30:06.850 ============================== 00:30:06.850 Admin Commands 00:30:06.850 -------------- 00:30:06.850 Get Log Page (02h): Supported 00:30:06.850 Identify (06h): Supported 00:30:06.850 Abort (08h): Supported 00:30:06.850 Set Features (09h): Supported 00:30:06.850 Get Features (0Ah): Supported 00:30:06.850 Asynchronous Event Request (0Ch): Supported 00:30:06.850 Keep Alive (18h): Supported 00:30:06.850 I/O Commands 00:30:06.850 ------------ 00:30:06.850 Flush (00h): Supported LBA-Change 00:30:06.850 Write (01h): Supported LBA-Change 00:30:06.850 Read (02h): Supported 00:30:06.850 Compare (05h): Supported 00:30:06.850 Write Zeroes (08h): Supported LBA-Change 00:30:06.850 Dataset Management (09h): Supported LBA-Change 00:30:06.850 Copy (19h): Supported LBA-Change 00:30:06.850 00:30:06.850 Error Log 00:30:06.850 ========= 00:30:06.850 00:30:06.850 Arbitration 00:30:06.850 =========== 00:30:06.850 Arbitration Burst: 1 00:30:06.850 00:30:06.850 Power Management 00:30:06.850 ================ 00:30:06.850 Number of Power States: 1 00:30:06.850 Current Power State: Power State #0 00:30:06.850 Power State #0: 00:30:06.850 Max Power: 0.00 W 00:30:06.850 Non-Operational State: 
Operational 00:30:06.850 Entry Latency: Not Reported 00:30:06.850 Exit Latency: Not Reported 00:30:06.850 Relative Read Throughput: 0 00:30:06.850 Relative Read Latency: 0 00:30:06.850 Relative Write Throughput: 0 00:30:06.850 Relative Write Latency: 0 00:30:06.850 Idle Power: Not Reported 00:30:06.850 Active Power: Not Reported 00:30:06.850 Non-Operational Permissive Mode: Not Supported 00:30:06.850 00:30:06.850 Health Information 00:30:06.850 ================== 00:30:06.850 Critical Warnings: 00:30:06.850 Available Spare Space: OK 00:30:06.850 Temperature: OK 00:30:06.850 Device Reliability: OK 00:30:06.850 Read Only: No 00:30:06.850 Volatile Memory Backup: OK 00:30:06.850 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:06.850 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:30:06.850 Available Spare: 0% 00:30:06.850 Available Spare Threshold: 0% 00:30:06.850 Life Percentage Used:[2024-12-15 06:20:26.939657] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.850 [2024-12-15 06:20:26.939662] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x229aed0) 00:30:06.850 [2024-12-15 06:20:26.939668] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.850 [2024-12-15 06:20:26.939680] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2306fc0, cid 7, qid 0 00:30:06.850 [2024-12-15 06:20:26.939756] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.850 [2024-12-15 06:20:26.939762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.850 [2024-12-15 06:20:26.939765] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.850 [2024-12-15 06:20:26.939768] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2306fc0) on tqpair=0x229aed0 00:30:06.850 [2024-12-15 06:20:26.939794] 
nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:30:06.850 [2024-12-15 06:20:26.939803] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2306540) on tqpair=0x229aed0 00:30:06.850 [2024-12-15 06:20:26.939808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.850 [2024-12-15 06:20:26.939813] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23066c0) on tqpair=0x229aed0 00:30:06.850 [2024-12-15 06:20:26.939817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.850 [2024-12-15 06:20:26.939821] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2306840) on tqpair=0x229aed0 00:30:06.850 [2024-12-15 06:20:26.939825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.850 [2024-12-15 06:20:26.939829] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23069c0) on tqpair=0x229aed0 00:30:06.850 [2024-12-15 06:20:26.939833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.850 [2024-12-15 06:20:26.939840] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.850 [2024-12-15 06:20:26.939843] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.850 [2024-12-15 06:20:26.939847] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x229aed0) 00:30:06.850 [2024-12-15 06:20:26.939852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.850 [2024-12-15 06:20:26.939863] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23069c0, cid 3, qid 
0 00:30:06.850 [2024-12-15 06:20:26.939925] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.850 [2024-12-15 06:20:26.939931] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.850 [2024-12-15 06:20:26.939934] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.850 [2024-12-15 06:20:26.939937] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23069c0) on tqpair=0x229aed0 00:30:06.850 [2024-12-15 06:20:26.939942] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:06.850 [2024-12-15 06:20:26.939945] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.850 [2024-12-15 06:20:26.939949] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x229aed0) 00:30:06.850 [2024-12-15 06:20:26.939954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.850 [2024-12-15 06:20:26.939967] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23069c0, cid 3, qid 0 00:30:06.850 [2024-12-15 06:20:26.944002] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.850 [2024-12-15 06:20:26.944010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.850 [2024-12-15 06:20:26.944013] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.850 [2024-12-15 06:20:26.944016] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23069c0) on tqpair=0x229aed0 00:30:06.850 [2024-12-15 06:20:26.944020] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:30:06.850 [2024-12-15 06:20:26.944028] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:30:06.850 [2024-12-15 06:20:26.944038] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:30:06.850 [2024-12-15 06:20:26.944041] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:06.850 [2024-12-15 06:20:26.944044] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x229aed0) 00:30:06.850 [2024-12-15 06:20:26.944050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.850 [2024-12-15 06:20:26.944061] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23069c0, cid 3, qid 0 00:30:06.850 [2024-12-15 06:20:26.944245] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:06.850 [2024-12-15 06:20:26.944251] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:06.850 [2024-12-15 06:20:26.944254] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:06.850 [2024-12-15 06:20:26.944257] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23069c0) on tqpair=0x229aed0 00:30:06.850 [2024-12-15 06:20:26.944264] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 0 milliseconds 00:30:06.850 0% 00:30:06.850 Data Units Read: 0 00:30:06.850 Data Units Written: 0 00:30:06.850 Host Read Commands: 0 00:30:06.850 Host Write Commands: 0 00:30:06.850 Controller Busy Time: 0 minutes 00:30:06.850 Power Cycles: 0 00:30:06.850 Power On Hours: 0 hours 00:30:06.850 Unsafe Shutdowns: 0 00:30:06.850 Unrecoverable Media Errors: 0 00:30:06.850 Lifetime Error Log Entries: 0 00:30:06.850 Warning Temperature Time: 0 minutes 00:30:06.850 Critical Temperature Time: 0 minutes 00:30:06.850 00:30:06.850 Number of Queues 00:30:06.850 ================ 00:30:06.850 Number of I/O Submission Queues: 127 00:30:06.850 Number of I/O Completion Queues: 127 00:30:06.850 00:30:06.850 Active Namespaces 00:30:06.850 ================= 00:30:06.850 Namespace ID:1 00:30:06.850 Error Recovery Timeout: Unlimited 00:30:06.851 
Command Set Identifier: NVM (00h) 00:30:06.851 Deallocate: Supported 00:30:06.851 Deallocated/Unwritten Error: Not Supported 00:30:06.851 Deallocated Read Value: Unknown 00:30:06.851 Deallocate in Write Zeroes: Not Supported 00:30:06.851 Deallocated Guard Field: 0xFFFF 00:30:06.851 Flush: Supported 00:30:06.851 Reservation: Supported 00:30:06.851 Namespace Sharing Capabilities: Multiple Controllers 00:30:06.851 Size (in LBAs): 131072 (0GiB) 00:30:06.851 Capacity (in LBAs): 131072 (0GiB) 00:30:06.851 Utilization (in LBAs): 131072 (0GiB) 00:30:06.851 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:06.851 EUI64: ABCDEF0123456789 00:30:06.851 UUID: 9d20f355-c7f9-45bd-bfbf-1835c1c07db9 00:30:06.851 Thin Provisioning: Not Supported 00:30:06.851 Per-NS Atomic Units: Yes 00:30:06.851 Atomic Boundary Size (Normal): 0 00:30:06.851 Atomic Boundary Size (PFail): 0 00:30:06.851 Atomic Boundary Offset: 0 00:30:06.851 Maximum Single Source Range Length: 65535 00:30:06.851 Maximum Copy Length: 65535 00:30:06.851 Maximum Source Range Count: 1 00:30:06.851 NGUID/EUI64 Never Reused: No 00:30:06.851 Namespace Write Protected: No 00:30:06.851 Number of LBA Formats: 1 00:30:06.851 Current LBA Format: LBA Format #00 00:30:06.851 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:06.851 00:30:06.851 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:06.851 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:06.851 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.851 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:06.851 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.851 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:06.851 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@56 -- # nvmftestfini 00:30:06.851 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:06.851 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:30:07.109 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:07.109 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:30:07.109 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:07.109 06:20:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:07.109 rmmod nvme_tcp 00:30:07.109 rmmod nvme_fabrics 00:30:07.109 rmmod nvme_keyring 00:30:07.109 06:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:07.109 06:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:30:07.109 06:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:30:07.109 06:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1116388 ']' 00:30:07.109 06:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1116388 00:30:07.109 06:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1116388 ']' 00:30:07.109 06:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1116388 00:30:07.109 06:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:30:07.109 06:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:07.109 06:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1116388 00:30:07.109 06:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:07.109 06:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
00:30:07.109 06:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1116388' 00:30:07.109 killing process with pid 1116388 00:30:07.109 06:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1116388 00:30:07.109 06:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1116388 00:30:07.368 06:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:07.368 06:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:07.368 06:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:07.368 06:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:30:07.368 06:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:30:07.368 06:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:07.368 06:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:30:07.368 06:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:07.368 06:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:07.368 06:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.368 06:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:07.368 06:20:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.272 06:20:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:09.272 00:30:09.272 real 0m9.206s 00:30:09.272 user 0m5.006s 00:30:09.272 sys 0m4.823s 00:30:09.272 06:20:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:09.272 06:20:29 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:09.272 ************************************ 00:30:09.272 END TEST nvmf_identify 00:30:09.272 ************************************ 00:30:09.272 06:20:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:09.272 06:20:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:09.272 06:20:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:09.272 06:20:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.272 ************************************ 00:30:09.272 START TEST nvmf_perf 00:30:09.272 ************************************ 00:30:09.531 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:09.531 * Looking for test storage... 
00:30:09.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:09.531 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:09.531 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:30:09.531 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:09.531 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:09.531 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:09.531 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:09.531 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:09.531 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:30:09.531 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:30:09.531 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:30:09.531 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:30:09.531 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:30:09.531 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:30:09.531 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:30:09.531 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:09.531 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:30:09.531 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:30:09.531 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:09.531 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:09.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.532 --rc genhtml_branch_coverage=1 00:30:09.532 --rc genhtml_function_coverage=1 00:30:09.532 --rc genhtml_legend=1 00:30:09.532 --rc geninfo_all_blocks=1 00:30:09.532 --rc geninfo_unexecuted_blocks=1 00:30:09.532 00:30:09.532 ' 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:09.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:30:09.532 --rc genhtml_branch_coverage=1 00:30:09.532 --rc genhtml_function_coverage=1 00:30:09.532 --rc genhtml_legend=1 00:30:09.532 --rc geninfo_all_blocks=1 00:30:09.532 --rc geninfo_unexecuted_blocks=1 00:30:09.532 00:30:09.532 ' 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:09.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.532 --rc genhtml_branch_coverage=1 00:30:09.532 --rc genhtml_function_coverage=1 00:30:09.532 --rc genhtml_legend=1 00:30:09.532 --rc geninfo_all_blocks=1 00:30:09.532 --rc geninfo_unexecuted_blocks=1 00:30:09.532 00:30:09.532 ' 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:09.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.532 --rc genhtml_branch_coverage=1 00:30:09.532 --rc genhtml_function_coverage=1 00:30:09.532 --rc genhtml_legend=1 00:30:09.532 --rc geninfo_all_blocks=1 00:30:09.532 --rc geninfo_unexecuted_blocks=1 00:30:09.532 00:30:09.532 ' 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:09.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:09.532 06:20:29 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:30:09.532 06:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:16.099 06:20:35 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:16.099 
06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:16.099 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:16.099 Found 0000:af:00.1 (0x8086 - 
0x159b) 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:16.099 Found net devices under 0000:af:00.0: cvl_0_0 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:16.099 06:20:35 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:16.099 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:16.100 Found net devices under 0000:af:00.1: cvl_0_1 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:16.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:16.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:30:16.100 00:30:16.100 --- 10.0.0.2 ping statistics --- 00:30:16.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.100 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:16.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:16.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:30:16.100 00:30:16.100 --- 10.0.0.1 ping statistics --- 00:30:16.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.100 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1119889 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1119889 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1119889 ']' 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:16.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:16.100 [2024-12-15 06:20:35.557772] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:30:16.100 [2024-12-15 06:20:35.557815] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:16.100 [2024-12-15 06:20:35.620650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:16.100 [2024-12-15 06:20:35.643818] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:16.100 [2024-12-15 06:20:35.643859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:16.100 [2024-12-15 06:20:35.643865] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:16.100 [2024-12-15 06:20:35.643871] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:16.100 [2024-12-15 06:20:35.643876] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:16.100 [2024-12-15 06:20:35.647026] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.100 [2024-12-15 06:20:35.647064] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:30:16.100 [2024-12-15 06:20:35.647197] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.100 [2024-12-15 06:20:35.647198] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:16.100 06:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:19.381 06:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:19.381 06:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:19.381 06:20:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:30:19.381 06:20:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:19.381 06:20:39 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:30:19.381 06:20:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:30:19.381 06:20:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:19.381 06:20:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:19.381 06:20:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:19.381 [2024-12-15 06:20:39.419985] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:19.381 06:20:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:19.638 06:20:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:19.639 06:20:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:19.897 06:20:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:19.897 06:20:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:20.155 06:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:20.155 [2024-12-15 06:20:40.227073] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:20.155 06:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:30:20.413 06:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:30:20.413 06:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:30:20.413 06:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:20.413 06:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:30:21.786 Initializing NVMe Controllers 00:30:21.786 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:30:21.786 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:30:21.786 Initialization complete. Launching workers. 00:30:21.786 ======================================================== 00:30:21.786 Latency(us) 00:30:21.786 Device Information : IOPS MiB/s Average min max 00:30:21.786 PCIE (0000:5e:00.0) NSID 1 from core 0: 100123.00 391.11 319.88 38.94 4235.92 00:30:21.786 ======================================================== 00:30:21.786 Total : 100123.00 391.11 319.88 38.94 4235.92 00:30:21.786 00:30:21.786 06:20:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:23.158 Initializing NVMe Controllers 00:30:23.158 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:23.158 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:23.158 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:23.158 Initialization complete. Launching workers. 
00:30:23.158 ======================================================== 00:30:23.158 Latency(us) 00:30:23.158 Device Information : IOPS MiB/s Average min max 00:30:23.158 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 181.36 0.71 5609.43 119.72 44676.32 00:30:23.158 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 50.82 0.20 20444.27 5986.46 47903.08 00:30:23.158 ======================================================== 00:30:23.159 Total : 232.18 0.91 8856.54 119.72 47903.08 00:30:23.159 00:30:23.159 06:20:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:24.531 Initializing NVMe Controllers 00:30:24.531 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:24.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:24.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:24.531 Initialization complete. Launching workers. 
00:30:24.531 ======================================================== 00:30:24.531 Latency(us) 00:30:24.531 Device Information : IOPS MiB/s Average min max 00:30:24.531 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11235.35 43.89 2850.59 473.62 45147.45 00:30:24.531 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3784.97 14.79 8518.20 4382.54 47842.94 00:30:24.531 ======================================================== 00:30:24.531 Total : 15020.32 58.67 4278.77 473.62 47842.94 00:30:24.531 00:30:24.531 06:20:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:24.531 06:20:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:24.531 06:20:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:27.059 Initializing NVMe Controllers 00:30:27.059 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:27.059 Controller IO queue size 128, less than required. 00:30:27.059 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:27.059 Controller IO queue size 128, less than required. 00:30:27.059 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:27.059 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:27.059 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:27.059 Initialization complete. Launching workers. 
00:30:27.059 ======================================================== 00:30:27.059 Latency(us) 00:30:27.059 Device Information : IOPS MiB/s Average min max 00:30:27.059 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1757.24 439.31 73485.87 40378.83 115807.47 00:30:27.059 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 607.53 151.88 219258.15 80006.00 339021.96 00:30:27.059 ======================================================== 00:30:27.059 Total : 2364.76 591.19 110935.91 40378.83 339021.96 00:30:27.059 00:30:27.059 06:20:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:27.317 No valid NVMe controllers or AIO or URING devices found 00:30:27.317 Initializing NVMe Controllers 00:30:27.317 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:27.317 Controller IO queue size 128, less than required. 00:30:27.317 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:27.317 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:27.317 Controller IO queue size 128, less than required. 00:30:27.317 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:27.317 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:30:27.317 WARNING: Some requested NVMe devices were skipped 00:30:27.317 06:20:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:29.845 Initializing NVMe Controllers 00:30:29.845 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:29.845 Controller IO queue size 128, less than required. 00:30:29.845 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:29.845 Controller IO queue size 128, less than required. 00:30:29.845 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:29.845 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:29.845 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:29.845 Initialization complete. Launching workers. 
00:30:29.845 00:30:29.845 ==================== 00:30:29.845 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:29.845 TCP transport: 00:30:29.845 polls: 10812 00:30:29.845 idle_polls: 7578 00:30:29.845 sock_completions: 3234 00:30:29.845 nvme_completions: 6431 00:30:29.845 submitted_requests: 9766 00:30:29.845 queued_requests: 1 00:30:29.845 00:30:29.845 ==================== 00:30:29.845 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:29.845 TCP transport: 00:30:29.845 polls: 14925 00:30:29.845 idle_polls: 11353 00:30:29.845 sock_completions: 3572 00:30:29.845 nvme_completions: 6523 00:30:29.845 submitted_requests: 9812 00:30:29.845 queued_requests: 1 00:30:29.845 ======================================================== 00:30:29.845 Latency(us) 00:30:29.845 Device Information : IOPS MiB/s Average min max 00:30:29.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1605.71 401.43 80954.63 49028.82 128649.60 00:30:29.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1628.68 407.17 79458.39 40317.69 120536.57 00:30:29.845 ======================================================== 00:30:29.845 Total : 3234.39 808.60 80201.20 40317.69 128649.60 00:30:29.845 00:30:29.845 06:20:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:29.846 06:20:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:30.103 06:20:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:30.104 06:20:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:30:30.104 06:20:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:33.381 06:20:53 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@72 -- # ls_guid=835cb608-3a0e-455a-9248-8e84bcfaeea4 00:30:33.381 06:20:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 835cb608-3a0e-455a-9248-8e84bcfaeea4 00:30:33.381 06:20:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=835cb608-3a0e-455a-9248-8e84bcfaeea4 00:30:33.381 06:20:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:33.381 06:20:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:33.381 06:20:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:33.381 06:20:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:33.381 06:20:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:33.381 { 00:30:33.381 "uuid": "835cb608-3a0e-455a-9248-8e84bcfaeea4", 00:30:33.381 "name": "lvs_0", 00:30:33.381 "base_bdev": "Nvme0n1", 00:30:33.381 "total_data_clusters": 238234, 00:30:33.381 "free_clusters": 238234, 00:30:33.381 "block_size": 512, 00:30:33.381 "cluster_size": 4194304 00:30:33.381 } 00:30:33.381 ]' 00:30:33.381 06:20:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="835cb608-3a0e-455a-9248-8e84bcfaeea4") .free_clusters' 00:30:33.639 06:20:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:30:33.639 06:20:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="835cb608-3a0e-455a-9248-8e84bcfaeea4") .cluster_size' 00:30:33.639 06:20:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:33.639 06:20:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:30:33.639 06:20:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 
00:30:33.639 952936 00:30:33.639 06:20:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:33.639 06:20:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:33.639 06:20:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 835cb608-3a0e-455a-9248-8e84bcfaeea4 lbd_0 20480 00:30:34.204 06:20:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=daede24d-5e51-42b1-8b6d-267c14d1beb1 00:30:34.204 06:20:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore daede24d-5e51-42b1-8b6d-267c14d1beb1 lvs_n_0 00:30:34.769 06:20:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=272e1e97-ed20-48c3-a33f-c4a086e76800 00:30:34.769 06:20:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 272e1e97-ed20-48c3-a33f-c4a086e76800 00:30:34.769 06:20:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=272e1e97-ed20-48c3-a33f-c4a086e76800 00:30:34.769 06:20:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:34.769 06:20:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:34.769 06:20:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:34.769 06:20:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:35.028 06:20:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:35.028 { 00:30:35.028 "uuid": "835cb608-3a0e-455a-9248-8e84bcfaeea4", 00:30:35.028 "name": "lvs_0", 00:30:35.028 "base_bdev": "Nvme0n1", 00:30:35.028 "total_data_clusters": 238234, 00:30:35.028 "free_clusters": 233114, 00:30:35.028 "block_size": 512, 00:30:35.028 
"cluster_size": 4194304 00:30:35.028 }, 00:30:35.028 { 00:30:35.028 "uuid": "272e1e97-ed20-48c3-a33f-c4a086e76800", 00:30:35.028 "name": "lvs_n_0", 00:30:35.028 "base_bdev": "daede24d-5e51-42b1-8b6d-267c14d1beb1", 00:30:35.028 "total_data_clusters": 5114, 00:30:35.028 "free_clusters": 5114, 00:30:35.028 "block_size": 512, 00:30:35.028 "cluster_size": 4194304 00:30:35.028 } 00:30:35.028 ]' 00:30:35.028 06:20:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="272e1e97-ed20-48c3-a33f-c4a086e76800") .free_clusters' 00:30:35.028 06:20:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:30:35.028 06:20:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="272e1e97-ed20-48c3-a33f-c4a086e76800") .cluster_size' 00:30:35.028 06:20:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:35.028 06:20:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:30:35.028 06:20:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456 00:30:35.028 20456 00:30:35.028 06:20:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:35.028 06:20:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 272e1e97-ed20-48c3-a33f-c4a086e76800 lbd_nest_0 20456 00:30:35.285 06:20:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=a43a172e-34e5-48b0-96f0-a2ac29502b36 00:30:35.285 06:20:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:35.542 06:20:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:35.542 06:20:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 a43a172e-34e5-48b0-96f0-a2ac29502b36 00:30:35.542 06:20:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:35.800 06:20:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:35.800 06:20:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:35.800 06:20:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:35.800 06:20:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:35.800 06:20:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:48.005 Initializing NVMe Controllers 00:30:48.005 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:48.005 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:48.005 Initialization complete. Launching workers. 
00:30:48.005 ======================================================== 00:30:48.005 Latency(us) 00:30:48.005 Device Information : IOPS MiB/s Average min max 00:30:48.005 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 44.70 0.02 22424.78 125.11 44899.48 00:30:48.005 ======================================================== 00:30:48.005 Total : 44.70 0.02 22424.78 125.11 44899.48 00:30:48.005 00:30:48.005 06:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:48.005 06:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:57.969 Initializing NVMe Controllers 00:30:57.969 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:57.969 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:57.969 Initialization complete. Launching workers. 
00:30:57.969 ======================================================== 00:30:57.969 Latency(us) 00:30:57.969 Device Information : IOPS MiB/s Average min max 00:30:57.969 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 61.98 7.75 16146.76 3991.69 48041.70 00:30:57.969 ======================================================== 00:30:57.969 Total : 61.98 7.75 16146.76 3991.69 48041.70 00:30:57.969 00:30:57.969 06:21:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:57.969 06:21:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:57.969 06:21:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:07.933 Initializing NVMe Controllers 00:31:07.933 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:07.933 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:07.933 Initialization complete. Launching workers. 
00:31:07.933 ======================================================== 00:31:07.933 Latency(us) 00:31:07.933 Device Information : IOPS MiB/s Average min max 00:31:07.933 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8611.12 4.20 3715.90 237.19 8106.96 00:31:07.933 ======================================================== 00:31:07.933 Total : 8611.12 4.20 3715.90 237.19 8106.96 00:31:07.933 00:31:07.933 06:21:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:07.933 06:21:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:17.901 Initializing NVMe Controllers 00:31:17.901 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:17.901 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:17.901 Initialization complete. Launching workers. 
00:31:17.901 ======================================================== 00:31:17.901 Latency(us) 00:31:17.901 Device Information : IOPS MiB/s Average min max 00:31:17.901 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4425.70 553.21 7234.27 598.64 18856.00 00:31:17.901 ======================================================== 00:31:17.901 Total : 4425.70 553.21 7234.27 598.64 18856.00 00:31:17.901 00:31:17.901 06:21:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:17.901 06:21:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:17.901 06:21:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:27.869 Initializing NVMe Controllers 00:31:27.869 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:27.869 Controller IO queue size 128, less than required. 00:31:27.869 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:27.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:27.869 Initialization complete. Launching workers. 
00:31:27.869 ======================================================== 00:31:27.869 Latency(us) 00:31:27.869 Device Information : IOPS MiB/s Average min max 00:31:27.869 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15826.17 7.73 8087.74 1436.69 23063.42 00:31:27.869 ======================================================== 00:31:27.869 Total : 15826.17 7.73 8087.74 1436.69 23063.42 00:31:27.869 00:31:27.869 06:21:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:27.869 06:21:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:40.160 Initializing NVMe Controllers 00:31:40.160 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:40.160 Controller IO queue size 128, less than required. 00:31:40.160 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:40.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:40.160 Initialization complete. Launching workers. 
00:31:40.160 ======================================================== 00:31:40.160 Latency(us) 00:31:40.160 Device Information : IOPS MiB/s Average min max 00:31:40.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1206.01 150.75 106843.86 23477.74 222931.16 00:31:40.160 ======================================================== 00:31:40.160 Total : 1206.01 150.75 106843.86 23477.74 222931.16 00:31:40.160 00:31:40.160 06:21:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:40.160 06:21:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a43a172e-34e5-48b0-96f0-a2ac29502b36 00:31:40.160 06:21:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:40.160 06:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete daede24d-5e51-42b1-8b6d-267c14d1beb1 00:31:40.160 06:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:40.160 06:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:40.160 06:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:40.160 06:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:40.160 06:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:31:40.160 06:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:40.160 06:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:31:40.160 06:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i 
in {1..20} 00:31:40.160 06:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:40.160 rmmod nvme_tcp 00:31:40.160 rmmod nvme_fabrics 00:31:40.160 rmmod nvme_keyring 00:31:40.160 06:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:40.160 06:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:31:40.160 06:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:31:40.160 06:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1119889 ']' 00:31:40.160 06:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1119889 00:31:40.160 06:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1119889 ']' 00:31:40.160 06:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1119889 00:31:40.160 06:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:31:40.160 06:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:40.160 06:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1119889 00:31:40.160 06:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:40.160 06:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:40.160 06:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1119889' 00:31:40.160 killing process with pid 1119889 00:31:40.160 06:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 1119889 00:31:40.160 06:21:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1119889 00:31:41.095 06:22:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:41.095 06:22:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # 
[[ tcp == \t\c\p ]] 00:31:41.095 06:22:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:41.095 06:22:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:31:41.095 06:22:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:31:41.095 06:22:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:41.095 06:22:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:31:41.095 06:22:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:41.095 06:22:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:41.095 06:22:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:41.095 06:22:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:41.095 06:22:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:43.633 00:31:43.633 real 1m33.864s 00:31:43.633 user 5m34.933s 00:31:43.633 sys 0m17.132s 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:43.633 ************************************ 00:31:43.633 END TEST nvmf_perf 00:31:43.633 ************************************ 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:43.633 ************************************ 00:31:43.633 START TEST nvmf_fio_host 00:31:43.633 ************************************ 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:43.633 * Looking for test storage... 00:31:43.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- 
# export 'LCOV_OPTS= 00:31:43.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.633 --rc genhtml_branch_coverage=1 00:31:43.633 --rc genhtml_function_coverage=1 00:31:43.633 --rc genhtml_legend=1 00:31:43.633 --rc geninfo_all_blocks=1 00:31:43.633 --rc geninfo_unexecuted_blocks=1 00:31:43.633 00:31:43.633 ' 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:43.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.633 --rc genhtml_branch_coverage=1 00:31:43.633 --rc genhtml_function_coverage=1 00:31:43.633 --rc genhtml_legend=1 00:31:43.633 --rc geninfo_all_blocks=1 00:31:43.633 --rc geninfo_unexecuted_blocks=1 00:31:43.633 00:31:43.633 ' 00:31:43.633 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:43.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.634 --rc genhtml_branch_coverage=1 00:31:43.634 --rc genhtml_function_coverage=1 00:31:43.634 --rc genhtml_legend=1 00:31:43.634 --rc geninfo_all_blocks=1 00:31:43.634 --rc geninfo_unexecuted_blocks=1 00:31:43.634 00:31:43.634 ' 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:43.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.634 --rc genhtml_branch_coverage=1 00:31:43.634 --rc genhtml_function_coverage=1 00:31:43.634 --rc genhtml_legend=1 00:31:43.634 --rc geninfo_all_blocks=1 00:31:43.634 --rc geninfo_unexecuted_blocks=1 00:31:43.634 00:31:43.634 ' 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:43.634 06:22:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:43.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:43.634 06:22:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:43.634 06:22:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.0 (0x8086 - 0x159b)' 00:31:50.207 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:50.207 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:50.207 06:22:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:50.207 Found net devices under 0000:af:00.0: cvl_0_0 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:50.207 Found net devices under 0000:af:00.1: cvl_0_1 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:50.207 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:50.208 06:22:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:50.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:50.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:31:50.208 00:31:50.208 --- 10.0.0.2 ping statistics --- 00:31:50.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:50.208 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:50.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:50.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:31:50.208 00:31:50.208 --- 10.0.0.1 ping statistics --- 00:31:50.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:50.208 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1137291 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1137291 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1137291 ']' 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:50.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.208 [2024-12-15 06:22:09.539261] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:31:50.208 [2024-12-15 06:22:09.539304] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:50.208 [2024-12-15 06:22:09.618857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:50.208 [2024-12-15 06:22:09.641816] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:50.208 [2024-12-15 06:22:09.641854] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:50.208 [2024-12-15 06:22:09.641861] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:50.208 [2024-12-15 06:22:09.641871] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:50.208 [2024-12-15 06:22:09.641876] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:50.208 [2024-12-15 06:22:09.643182] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:50.208 [2024-12-15 06:22:09.643293] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:31:50.208 [2024-12-15 06:22:09.643379] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:50.208 [2024-12-15 06:22:09.643380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:50.208 [2024-12-15 06:22:09.904483] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.208 06:22:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:50.208 Malloc1 00:31:50.208 06:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:50.467 06:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:50.467 06:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:50.724 [2024-12-15 06:22:10.767556] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:50.724 06:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:50.981 06:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:50.981 06:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:50.981 06:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:50.981 06:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:50.981 06:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:50.981 06:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:50.981 06:22:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:50.981 06:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:50.981 06:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:50.981 06:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:50.981 06:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:50.981 06:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:50.981 06:22:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:50.981 06:22:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:50.981 06:22:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:50.981 06:22:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:50.981 06:22:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:50.981 06:22:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:50.981 06:22:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:50.981 06:22:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:50.981 06:22:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:50.981 06:22:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:50.981 06:22:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:51.239 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:51.239 fio-3.35 00:31:51.239 Starting 1 thread 00:31:53.763 00:31:53.763 test: (groupid=0, jobs=1): err= 0: pid=1137811: Sun Dec 15 06:22:13 2024 00:31:53.763 read: IOPS=12.0k, BW=47.1MiB/s (49.3MB/s)(94.4MiB/2005msec) 00:31:53.763 slat (nsec): min=1518, max=253356, avg=1688.32, stdev=2257.19 00:31:53.763 clat (usec): min=3077, max=10480, avg=5878.95, stdev=439.28 00:31:53.763 lat (usec): min=3116, max=10482, avg=5880.64, stdev=439.22 00:31:53.763 clat percentiles (usec): 00:31:53.763 | 1.00th=[ 4817], 5.00th=[ 5211], 10.00th=[ 5342], 20.00th=[ 5538], 00:31:53.763 | 30.00th=[ 5669], 40.00th=[ 5800], 50.00th=[ 5866], 60.00th=[ 5997], 00:31:53.763 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6390], 95.00th=[ 6521], 00:31:53.763 | 99.00th=[ 6849], 99.50th=[ 6915], 99.90th=[ 8586], 99.95th=[ 9241], 00:31:53.763 | 99.99th=[10028] 00:31:53.763 bw ( KiB/s): min=47297, max=48720, per=99.90%, avg=48140.25, stdev=662.97, samples=4 00:31:53.763 iops : min=11824, max=12180, avg=12035.00, stdev=165.85, samples=4 00:31:53.763 write: IOPS=12.0k, BW=46.9MiB/s (49.1MB/s)(94.0MiB/2005msec); 0 zone resets 00:31:53.763 slat (nsec): min=1564, max=224908, avg=1758.94, stdev=1635.64 00:31:53.763 clat (usec): min=2430, max=8639, avg=4732.54, stdev=356.31 00:31:53.763 lat (usec): min=2446, max=8641, avg=4734.30, stdev=356.35 00:31:53.763 clat percentiles (usec): 00:31:53.763 | 1.00th=[ 3884], 5.00th=[ 4178], 10.00th=[ 4293], 20.00th=[ 4424], 00:31:53.763 | 30.00th=[ 4555], 40.00th=[ 4621], 50.00th=[ 4752], 60.00th=[ 4817], 
00:31:53.763 | 70.00th=[ 4883], 80.00th=[ 5014], 90.00th=[ 5145], 95.00th=[ 5276], 00:31:53.763 | 99.00th=[ 5538], 99.50th=[ 5669], 99.90th=[ 6587], 99.95th=[ 7439], 00:31:53.763 | 99.99th=[ 8586] 00:31:53.763 bw ( KiB/s): min=47616, max=48512, per=99.97%, avg=47975.75, stdev=397.39, samples=4 00:31:53.763 iops : min=11904, max=12128, avg=11993.75, stdev=99.31, samples=4 00:31:53.763 lat (msec) : 4=0.86%, 10=99.13%, 20=0.01% 00:31:53.763 cpu : usr=72.85%, sys=26.10%, ctx=100, majf=0, minf=3 00:31:53.763 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:53.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.763 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:53.763 issued rwts: total=24154,24055,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.763 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:53.763 00:31:53.763 Run status group 0 (all jobs): 00:31:53.763 READ: bw=47.1MiB/s (49.3MB/s), 47.1MiB/s-47.1MiB/s (49.3MB/s-49.3MB/s), io=94.4MiB (98.9MB), run=2005-2005msec 00:31:53.763 WRITE: bw=46.9MiB/s (49.1MB/s), 46.9MiB/s-46.9MiB/s (49.1MB/s-49.1MB/s), io=94.0MiB (98.5MB), run=2005-2005msec 00:31:53.763 06:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:53.763 06:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:53.763 06:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:53.763 06:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:31:53.763 06:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:53.763 06:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:53.763 06:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:53.763 06:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:53.763 06:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:53.763 06:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:53.763 06:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:53.763 06:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:53.763 06:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:53.763 06:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:53.763 06:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:53.763 06:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:53.763 06:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:53.763 06:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:53.763 06:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:53.763 06:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:31:53.763 06:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:53.763 06:22:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:54.021 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:54.021 fio-3.35 00:31:54.021 Starting 1 thread 00:31:56.546 00:31:56.546 test: (groupid=0, jobs=1): err= 0: pid=1138292: Sun Dec 15 06:22:16 2024 00:31:56.546 read: IOPS=11.1k, BW=173MiB/s (182MB/s)(348MiB/2006msec) 00:31:56.546 slat (nsec): min=2488, max=87686, avg=2786.82, stdev=1252.85 00:31:56.546 clat (usec): min=1981, max=12700, avg=6660.14, stdev=1562.14 00:31:56.546 lat (usec): min=1983, max=12703, avg=6662.93, stdev=1562.27 00:31:56.546 clat percentiles (usec): 00:31:56.546 | 1.00th=[ 3687], 5.00th=[ 4359], 10.00th=[ 4686], 20.00th=[ 5276], 00:31:56.546 | 30.00th=[ 5735], 40.00th=[ 6128], 50.00th=[ 6587], 60.00th=[ 7111], 00:31:56.546 | 70.00th=[ 7439], 80.00th=[ 7898], 90.00th=[ 8586], 95.00th=[ 9241], 00:31:56.546 | 99.00th=[10814], 99.50th=[11600], 99.90th=[12387], 99.95th=[12518], 00:31:56.546 | 99.99th=[12649] 00:31:56.546 bw ( KiB/s): min=81632, max=94690, per=50.32%, avg=89320.50, stdev=5498.35, samples=4 00:31:56.546 iops : min= 5102, max= 5918, avg=5582.50, stdev=343.61, samples=4 00:31:56.546 write: IOPS=6471, BW=101MiB/s (106MB/s)(183MiB/1806msec); 0 zone resets 00:31:56.546 slat (usec): min=29, max=380, avg=31.42, stdev= 7.31 00:31:56.546 clat (usec): min=4674, max=15126, avg=8595.93, stdev=1547.19 00:31:56.546 lat (usec): min=4703, max=15156, avg=8627.35, stdev=1548.49 00:31:56.546 clat percentiles (usec): 00:31:56.546 | 1.00th=[ 5800], 5.00th=[ 6456], 10.00th=[ 6849], 
20.00th=[ 7308], 00:31:56.546 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8717], 00:31:56.546 | 70.00th=[ 9110], 80.00th=[ 9765], 90.00th=[10814], 95.00th=[11469], 00:31:56.546 | 99.00th=[12911], 99.50th=[13960], 99.90th=[14746], 99.95th=[14877], 00:31:56.546 | 99.99th=[15008] 00:31:56.546 bw ( KiB/s): min=84928, max=98459, per=89.74%, avg=92926.75, stdev=5849.38, samples=4 00:31:56.546 iops : min= 5308, max= 6153, avg=5807.75, stdev=365.37, samples=4 00:31:56.546 lat (msec) : 2=0.01%, 4=1.46%, 10=90.87%, 20=7.66% 00:31:56.546 cpu : usr=87.34%, sys=11.91%, ctx=62, majf=0, minf=3 00:31:56.546 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:31:56.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:56.546 issued rwts: total=22256,11688,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.546 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:56.546 00:31:56.546 Run status group 0 (all jobs): 00:31:56.546 READ: bw=173MiB/s (182MB/s), 173MiB/s-173MiB/s (182MB/s-182MB/s), io=348MiB (365MB), run=2006-2006msec 00:31:56.546 WRITE: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=183MiB (191MB), run=1806-1806msec 00:31:56.546 06:22:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:56.546 06:22:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:56.546 06:22:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:56.546 06:22:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:56.546 06:22:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:56.546 06:22:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 
00:31:56.546 06:22:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:56.546 06:22:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:56.546 06:22:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:56.546 06:22:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:56.546 06:22:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:31:56.546 06:22:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:31:59.825 Nvme0n1 00:31:59.825 06:22:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:32:03.100 06:22:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=d545888d-4a00-4165-982e-46bf2c9555c0 00:32:03.100 06:22:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb d545888d-4a00-4165-982e-46bf2c9555c0 00:32:03.100 06:22:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=d545888d-4a00-4165-982e-46bf2c9555c0 00:32:03.100 06:22:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:03.100 06:22:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:32:03.100 06:22:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:03.100 06:22:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores 00:32:03.100 06:22:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:03.100 { 00:32:03.100 "uuid": "d545888d-4a00-4165-982e-46bf2c9555c0", 00:32:03.100 "name": "lvs_0", 00:32:03.100 "base_bdev": "Nvme0n1", 00:32:03.100 "total_data_clusters": 930, 00:32:03.100 "free_clusters": 930, 00:32:03.100 "block_size": 512, 00:32:03.100 "cluster_size": 1073741824 00:32:03.100 } 00:32:03.100 ]' 00:32:03.100 06:22:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="d545888d-4a00-4165-982e-46bf2c9555c0") .free_clusters' 00:32:03.100 06:22:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:32:03.100 06:22:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="d545888d-4a00-4165-982e-46bf2c9555c0") .cluster_size' 00:32:03.100 06:22:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:32:03.100 06:22:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:32:03.100 06:22:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:32:03.100 952320 00:32:03.100 06:22:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:32:03.100 d0d17c75-b31c-4c02-8cbd-654b3f66c0ec 00:32:03.100 06:22:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:03.358 06:22:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:32:03.615 06:22:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:03.897 06:22:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:03.897 06:22:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:03.897 06:22:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:03.897 06:22:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:03.897 06:22:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:03.897 06:22:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:03.897 06:22:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:03.897 06:22:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:03.897 06:22:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:03.897 06:22:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:03.897 06:22:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:03.897 06:22:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print 
$3}' 00:32:03.897 06:22:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:03.897 06:22:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:03.897 06:22:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:03.897 06:22:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:03.897 06:22:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:03.897 06:22:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:03.897 06:22:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:03.897 06:22:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:03.897 06:22:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:03.897 06:22:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:04.159 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:04.159 fio-3.35 00:32:04.159 Starting 1 thread 00:32:06.698 00:32:06.698 test: (groupid=0, jobs=1): err= 0: pid=1140010: Sun Dec 15 06:22:26 2024 00:32:06.698 read: IOPS=8171, BW=31.9MiB/s (33.5MB/s)(64.0MiB/2006msec) 00:32:06.698 slat (nsec): min=1518, max=89714, avg=1637.26, stdev=1016.50 00:32:06.698 clat (usec): min=687, max=169814, avg=8588.60, stdev=10202.91 00:32:06.698 lat (usec): min=688, max=169831, avg=8590.23, stdev=10203.06 00:32:06.698 clat percentiles 
(msec): 00:32:06.698 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:32:06.698 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 9], 00:32:06.698 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 9], 00:32:06.698 | 99.00th=[ 10], 99.50th=[ 12], 99.90th=[ 169], 99.95th=[ 169], 00:32:06.698 | 99.99th=[ 169] 00:32:06.698 bw ( KiB/s): min=23144, max=36008, per=99.85%, avg=32636.00, stdev=6330.47, samples=4 00:32:06.698 iops : min= 5786, max= 9002, avg=8159.00, stdev=1582.62, samples=4 00:32:06.698 write: IOPS=8167, BW=31.9MiB/s (33.5MB/s)(64.0MiB/2006msec); 0 zone resets 00:32:06.698 slat (nsec): min=1554, max=75578, avg=1710.26, stdev=719.82 00:32:06.698 clat (usec): min=200, max=168449, avg=6967.75, stdev=9528.32 00:32:06.698 lat (usec): min=202, max=168453, avg=6969.46, stdev=9528.50 00:32:06.698 clat percentiles (msec): 00:32:06.698 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:32:06.698 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:32:06.698 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 8], 95.00th=[ 8], 00:32:06.698 | 99.00th=[ 8], 99.50th=[ 9], 99.90th=[ 169], 99.95th=[ 169], 00:32:06.698 | 99.99th=[ 169] 00:32:06.698 bw ( KiB/s): min=24168, max=35640, per=99.98%, avg=32664.00, stdev=5665.46, samples=4 00:32:06.698 iops : min= 6042, max= 8910, avg=8166.00, stdev=1416.36, samples=4 00:32:06.698 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:32:06.698 lat (msec) : 2=0.05%, 4=0.25%, 10=99.13%, 20=0.16%, 250=0.39% 00:32:06.698 cpu : usr=69.63%, sys=29.53%, ctx=69, majf=0, minf=3 00:32:06.698 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:06.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.698 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:06.698 issued rwts: total=16392,16385,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.698 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:06.698 00:32:06.698 Run 
status group 0 (all jobs): 00:32:06.698 READ: bw=31.9MiB/s (33.5MB/s), 31.9MiB/s-31.9MiB/s (33.5MB/s-33.5MB/s), io=64.0MiB (67.1MB), run=2006-2006msec 00:32:06.698 WRITE: bw=31.9MiB/s (33.5MB/s), 31.9MiB/s-31.9MiB/s (33.5MB/s-33.5MB/s), io=64.0MiB (67.1MB), run=2006-2006msec 00:32:06.698 06:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:06.698 06:22:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:07.631 06:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=bab86e72-e3ba-4ff9-9031-67c33ce8ab72 00:32:07.631 06:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb bab86e72-e3ba-4ff9-9031-67c33ce8ab72 00:32:07.631 06:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=bab86e72-e3ba-4ff9-9031-67c33ce8ab72 00:32:07.631 06:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:07.631 06:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:32:07.631 06:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:07.631 06:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:07.889 06:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:07.889 { 00:32:07.889 "uuid": "d545888d-4a00-4165-982e-46bf2c9555c0", 00:32:07.889 "name": "lvs_0", 00:32:07.889 "base_bdev": "Nvme0n1", 00:32:07.889 "total_data_clusters": 930, 00:32:07.889 "free_clusters": 0, 00:32:07.889 "block_size": 512, 00:32:07.889 "cluster_size": 1073741824 00:32:07.889 }, 
00:32:07.889 { 00:32:07.889 "uuid": "bab86e72-e3ba-4ff9-9031-67c33ce8ab72", 00:32:07.889 "name": "lvs_n_0", 00:32:07.889 "base_bdev": "d0d17c75-b31c-4c02-8cbd-654b3f66c0ec", 00:32:07.889 "total_data_clusters": 237847, 00:32:07.889 "free_clusters": 237847, 00:32:07.889 "block_size": 512, 00:32:07.889 "cluster_size": 4194304 00:32:07.889 } 00:32:07.889 ]' 00:32:07.889 06:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="bab86e72-e3ba-4ff9-9031-67c33ce8ab72") .free_clusters' 00:32:07.889 06:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:32:07.889 06:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="bab86e72-e3ba-4ff9-9031-67c33ce8ab72") .cluster_size' 00:32:07.889 06:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:32:07.889 06:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:32:07.889 06:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:32:07.889 951388 00:32:07.889 06:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:32:08.454 e03ae083-c90d-4e51-84fe-6c8ac41a2a9d 00:32:08.454 06:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:32:08.712 06:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:32:08.712 06:22:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:32:08.969 06:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:08.969 06:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:08.969 06:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:08.969 06:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:08.969 06:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:08.969 06:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:08.969 06:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:08.969 06:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:08.969 06:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:08.969 06:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:08.969 06:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:08.969 06:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:08.969 06:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 
-- # asan_lib= 00:32:08.969 06:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:08.969 06:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:08.970 06:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:08.970 06:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:08.970 06:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:08.970 06:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:08.970 06:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:08.970 06:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:08.970 06:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:09.227 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:09.227 fio-3.35 00:32:09.227 Starting 1 thread 00:32:11.759 00:32:11.759 test: (groupid=0, jobs=1): err= 0: pid=1140940: Sun Dec 15 06:22:31 2024 00:32:11.759 read: IOPS=7882, BW=30.8MiB/s (32.3MB/s)(61.8MiB/2006msec) 00:32:11.759 slat (nsec): min=1488, max=90689, avg=1694.32, stdev=1035.26 00:32:11.759 clat (usec): min=2876, max=14383, avg=8968.93, stdev=773.08 00:32:11.759 lat (usec): min=2880, max=14384, avg=8970.63, stdev=773.02 00:32:11.759 clat percentiles (usec): 00:32:11.759 | 1.00th=[ 7242], 5.00th=[ 7767], 10.00th=[ 8029], 20.00th=[ 8356], 00:32:11.759 
| 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9110], 00:32:11.759 | 70.00th=[ 9372], 80.00th=[ 9503], 90.00th=[ 9896], 95.00th=[10159], 00:32:11.759 | 99.00th=[10814], 99.50th=[11076], 99.90th=[13042], 99.95th=[13304], 00:32:11.759 | 99.99th=[14353] 00:32:11.759 bw ( KiB/s): min=30112, max=32096, per=99.84%, avg=31480.00, stdev=921.80, samples=4 00:32:11.759 iops : min= 7528, max= 8024, avg=7870.00, stdev=230.45, samples=4 00:32:11.759 write: IOPS=7855, BW=30.7MiB/s (32.2MB/s)(61.6MiB/2006msec); 0 zone resets 00:32:11.760 slat (nsec): min=1542, max=158805, avg=1758.87, stdev=1307.59 00:32:11.760 clat (usec): min=1332, max=12287, avg=7205.24, stdev=659.56 00:32:11.760 lat (usec): min=1337, max=12289, avg=7207.00, stdev=659.56 00:32:11.760 clat percentiles (usec): 00:32:11.760 | 1.00th=[ 5735], 5.00th=[ 6194], 10.00th=[ 6456], 20.00th=[ 6718], 00:32:11.760 | 30.00th=[ 6915], 40.00th=[ 7046], 50.00th=[ 7177], 60.00th=[ 7373], 00:32:11.760 | 70.00th=[ 7504], 80.00th=[ 7701], 90.00th=[ 7963], 95.00th=[ 8225], 00:32:11.760 | 99.00th=[ 8717], 99.50th=[ 9110], 99.90th=[11338], 99.95th=[12125], 00:32:11.760 | 99.99th=[12256] 00:32:11.760 bw ( KiB/s): min=31176, max=31488, per=99.96%, avg=31410.00, stdev=156.00, samples=4 00:32:11.760 iops : min= 7794, max= 7872, avg=7852.50, stdev=39.00, samples=4 00:32:11.760 lat (msec) : 2=0.01%, 4=0.11%, 10=95.91%, 20=3.97% 00:32:11.760 cpu : usr=73.42%, sys=25.49%, ctx=161, majf=0, minf=3 00:32:11.760 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:11.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:11.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:11.760 issued rwts: total=15813,15759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:11.760 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:11.760 00:32:11.760 Run status group 0 (all jobs): 00:32:11.760 READ: bw=30.8MiB/s (32.3MB/s), 30.8MiB/s-30.8MiB/s 
(32.3MB/s-32.3MB/s), io=61.8MiB (64.8MB), run=2006-2006msec 00:32:11.760 WRITE: bw=30.7MiB/s (32.2MB/s), 30.7MiB/s-30.7MiB/s (32.2MB/s-32.2MB/s), io=61.6MiB (64.5MB), run=2006-2006msec 00:32:11.760 06:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:12.019 06:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:32:12.019 06:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:16.198 06:22:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:16.198 06:22:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:18.724 06:22:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:18.982 06:22:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@124 -- # set +e 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:20.880 rmmod nvme_tcp 00:32:20.880 rmmod nvme_fabrics 00:32:20.880 rmmod nvme_keyring 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1137291 ']' 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1137291 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1137291 ']' 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 1137291 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1137291 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1137291' 00:32:20.880 killing process with pid 1137291 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1137291 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 
1137291 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:20.880 06:22:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:23.416 00:32:23.416 real 0m39.716s 00:32:23.416 user 2m39.460s 00:32:23.416 sys 0m8.860s 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.416 ************************************ 00:32:23.416 END TEST nvmf_fio_host 00:32:23.416 ************************************ 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh 
--transport=tcp 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.416 ************************************ 00:32:23.416 START TEST nvmf_failover 00:32:23.416 ************************************ 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:23.416 * Looking for test storage... 00:32:23.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 
00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 
00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:23.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.416 --rc genhtml_branch_coverage=1 00:32:23.416 --rc genhtml_function_coverage=1 00:32:23.416 --rc genhtml_legend=1 00:32:23.416 --rc geninfo_all_blocks=1 00:32:23.416 --rc geninfo_unexecuted_blocks=1 00:32:23.416 00:32:23.416 ' 00:32:23.416 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:23.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.416 --rc genhtml_branch_coverage=1 00:32:23.416 --rc genhtml_function_coverage=1 00:32:23.416 --rc genhtml_legend=1 00:32:23.416 --rc geninfo_all_blocks=1 00:32:23.416 --rc geninfo_unexecuted_blocks=1 00:32:23.417 00:32:23.417 ' 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:23.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.417 --rc genhtml_branch_coverage=1 00:32:23.417 --rc genhtml_function_coverage=1 00:32:23.417 --rc genhtml_legend=1 00:32:23.417 --rc geninfo_all_blocks=1 00:32:23.417 --rc geninfo_unexecuted_blocks=1 00:32:23.417 00:32:23.417 ' 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:23.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.417 --rc genhtml_branch_coverage=1 00:32:23.417 --rc genhtml_function_coverage=1 00:32:23.417 --rc genhtml_legend=1 00:32:23.417 --rc geninfo_all_blocks=1 00:32:23.417 --rc geninfo_unexecuted_blocks=1 00:32:23.417 00:32:23.417 ' 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:23.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:32:23.417 06:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:29.986 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:29.986 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:32:29.986 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:29.986 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:29.986 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:29.986 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:29.986 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:29.986 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:32:29.986 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:29.986 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:32:29.986 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:32:29.986 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:32:29.986 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:32:29.986 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:32:29.986 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:32:29.986 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:29.986 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:29.986 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:29.986 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:29.986 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:29.986 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:29.987 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:29.987 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:29.987 06:22:48 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:29.987 Found net devices under 0000:af:00.0: cvl_0_0 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:29.987 Found net devices 
under 0000:af:00.1: cvl_0_1 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:29.987 
06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:29.987 06:22:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:29.987 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:29.987 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:29.987 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:29.987 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:29.987 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:29.987 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:29.987 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:29.987 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:29.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:29.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:32:29.987 00:32:29.987 --- 10.0.0.2 ping statistics --- 00:32:29.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:29.987 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:32:29.987 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:29.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:29.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:32:29.987 00:32:29.987 --- 10.0.0.1 ping statistics --- 00:32:29.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:29.987 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:32:29.987 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:29.987 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:32:29.987 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:29.987 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:29.987 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:29.987 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:29.987 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:29.987 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:29.987 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:29.987 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:29.987 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:29.987 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:29.987 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:29.987 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1146175 00:32:29.987 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:29.987 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 1146175 00:32:29.987 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1146175 ']' 00:32:29.987 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:29.987 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:29.988 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:29.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:29.988 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:29.988 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:29.988 [2024-12-15 06:22:49.295216] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:32:29.988 [2024-12-15 06:22:49.295259] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:29.988 [2024-12-15 06:22:49.373912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:29.988 [2024-12-15 06:22:49.395728] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:29.988 [2024-12-15 06:22:49.395763] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:29.988 [2024-12-15 06:22:49.395770] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:29.988 [2024-12-15 06:22:49.395776] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:32:29.988 [2024-12-15 06:22:49.395781] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:29.988 [2024-12-15 06:22:49.397091] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:32:29.988 [2024-12-15 06:22:49.397196] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:29.988 [2024-12-15 06:22:49.397196] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:32:29.988 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:29.988 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:29.988 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:29.988 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:29.988 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:29.988 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:29.988 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:29.988 [2024-12-15 06:22:49.696082] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:29.988 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:29.988 Malloc0 00:32:29.988 06:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:30.245 06:22:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:30.245 06:22:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:30.503 [2024-12-15 06:22:50.530922] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:30.503 06:22:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:30.760 [2024-12-15 06:22:50.727484] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:30.760 06:22:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:31.017 [2024-12-15 06:22:50.924121] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:31.018 06:22:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:31.018 06:22:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1146433 00:32:31.018 06:22:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:31.018 06:22:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1146433 /var/tmp/bdevperf.sock 00:32:31.018 06:22:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 1146433 ']' 00:32:31.018 06:22:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:31.018 06:22:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:31.018 06:22:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:31.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:31.018 06:22:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:31.018 06:22:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:31.274 06:22:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:31.274 06:22:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:31.274 06:22:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:31.531 NVMe0n1 00:32:31.531 06:22:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:31.788 00:32:31.788 06:22:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1146645 00:32:31.788 06:22:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:31.788 06:22:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:32:32.719 06:22:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:32.976 [2024-12-15 06:22:52.938387] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3eac0 is same with the state(6) to be set 00:32:32.977 06:22:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:32:36.248 06:22:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:36.248 00:32:36.248 06:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:36.504 [2024-12-15 06:22:56.430769] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3f8e0 is same with the state(6) to be set 00:32:36.505 06:22:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:32:39.939 06:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:39.939 [2024-12-15 06:22:59.640622] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:39.939 06:22:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:32:40.870 06:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:40.870 [2024-12-15 06:23:00.856129] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d40690 is same with the state(6) to be set 00:32:40.870 06:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@59 -- # wait 1146645
00:32:47.422 {
00:32:47.422 "results": [
00:32:47.422 {
00:32:47.422 "job": "NVMe0n1",
00:32:47.422 "core_mask": "0x1",
00:32:47.422 "workload": "verify",
00:32:47.422 "status": "finished",
00:32:47.422 "verify_range": {
00:32:47.422 "start": 0,
00:32:47.422 "length": 16384
00:32:47.422 },
00:32:47.422 "queue_depth": 128,
00:32:47.422 "io_size": 4096,
00:32:47.422 "runtime": 15.011012,
00:32:47.422 "iops": 11225.625560754997,
00:32:47.422 "mibps": 43.85009984669921,
00:32:47.422 "io_failed": 7229,
00:32:47.422 "io_timeout": 0,
00:32:47.422 "avg_latency_us": 10910.681321942935,
00:32:47.422 "min_latency_us": 433.0057142857143,
00:32:47.422 "max_latency_us": 20721.859047619047
00:32:47.422 }
00:32:47.422 ],
00:32:47.422 "core_count": 1
00:32:47.422 }
00:32:47.422 06:23:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1146433
00:32:47.422 06:23:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1146433 ']'
00:32:47.422 06:23:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1146433
00:32:47.422 06:23:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:32:47.422 06:23:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:47.422 06:23:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1146433
00:32:47.423 06:23:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:47.423 06:23:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:47.423 06:23:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1146433'
killing process with pid 1146433
06:23:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1146433
00:32:47.423 
06:23:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1146433
00:32:47.423 06:23:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:32:47.423 [2024-12-15 06:22:50.988269] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:32:47.423 [2024-12-15 06:22:50.988323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146433 ]
00:32:47.423 [2024-12-15 06:22:51.063303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:47.423 [2024-12-15 06:22:51.085760] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:32:47.423 Running I/O for 15 seconds...
00:32:47.423 11351.00 IOPS, 44.34 MiB/s [2024-12-15T05:23:07.563Z] [2024-12-15 06:22:52.940135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:100536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.423 [2024-12-15 06:22:52.940176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.423 [2024-12-15 06:22:52.940191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:100544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.423 [2024-12-15 06:22:52.940199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.423 [2024-12-15 06:22:52.940209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:100552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.423 [2024-12-15 06:22:52.940216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.423 [2024-12-15 06:22:52.940225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:100560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.423 [2024-12-15 06:22:52.940232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.423 [2024-12-15 06:22:52.940240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.423 [2024-12-15 06:22:52.940247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.423 [2024-12-15 06:22:52.940254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:100576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.423 [2024-12-15 06:22:52.940261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.423 [2024-12-15 06:22:52.940270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:101088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.423 [2024-12-15 06:22:52.940277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.423 [2024-12-15 06:22:52.940286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:101096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.423 [2024-12-15 06:22:52.940292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.423 [2024-12-15 06:22:52.940300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:100584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.423 [2024-12-15 
06:22:52.940306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.423 [2024-12-15 06:22:52.940314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:100592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.423 [2024-12-15 06:22:52.940321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.423 [2024-12-15 06:22:52.940329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:100600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.423 [2024-12-15 06:22:52.940336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.423 [2024-12-15 06:22:52.940349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.423 [2024-12-15 06:22:52.940356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.423 [2024-12-15 06:22:52.940363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:100616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.423 [2024-12-15 06:22:52.940370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.423 [2024-12-15 06:22:52.940378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:100624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.423 [2024-12-15 06:22:52.940385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.423 [2024-12-15 06:22:52.940394] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:72 nsid:1 lba:100632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.423 [2024-12-15 06:22:52.940401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.423 [2024-12-15 06:22:52.940408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.423 [2024-12-15 06:22:52.940415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.423 [2024-12-15 06:22:52.940423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:100648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.423 [2024-12-15 06:22:52.940431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.423 [2024-12-15 06:22:52.940438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.423 [2024-12-15 06:22:52.940445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.423 [2024-12-15 06:22:52.940453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:100664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.423 [2024-12-15 06:22:52.940459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.423 [2024-12-15 06:22:52.940468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:100672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.423 [2024-12-15 06:22:52.940474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:47.423 [2024-12-15 06:22:52.940483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:100680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.423 [2024-12-15 06:22:52.940489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.423 [2024-12-15 06:22:52.940497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.423 [2024-12-15 06:22:52.940504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.423 [2024-12-15 06:22:52.940512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:100696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.423 [2024-12-15 06:22:52.940518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.423 [2024-12-15 06:22:52.940526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:100704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.423 [2024-12-15 06:22:52.940538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.423 [2024-12-15 06:22:52.940547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:100712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.423 [2024-12-15 06:22:52.940554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.423 [2024-12-15 06:22:52.940562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.423 [2024-12-15 
06:22:52.940569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.423 [2024-12-15 06:22:52.940576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:100728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.423 [2024-12-15 06:22:52.940584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.423 [2024-12-15 06:22:52.940592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:100736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.423 [2024-12-15 06:22:52.940599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.423 [2024-12-15 06:22:52.940607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:100744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.423 [2024-12-15 06:22:52.940613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.423 [2024-12-15 06:22:52.940621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.423 [2024-12-15 06:22:52.940629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.423 [2024-12-15 06:22:52.940638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:100760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.423 [2024-12-15 06:22:52.940645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.423 [2024-12-15 06:22:52.940653] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:101104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.423 [2024-12-15 06:22:52.940659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.423 [2024-12-15 06:22:52.940667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:100768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.423 [2024-12-15 06:22:52.940673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.940682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:100776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.940688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.940696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:100784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.940703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.940711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:100792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.940717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.940726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:100800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.940734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.940743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:100808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.940749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.940757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:100816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.940764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.940771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:100824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.940778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.940786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:100832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.940792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.940800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:100840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.940807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.940815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 
06:22:52.940821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.940828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:100856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.940835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.940844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:100864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.940850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.940858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:100872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.940865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.940873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:100880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.940879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.940887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:100888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.940894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.940902] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:81 nsid:1 lba:100896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.940910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.940918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:100904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.940925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.940933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:100912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.940940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.940948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.940954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.940962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.940969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.940977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:100936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.940983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.940995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:100944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.941002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.941011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:100952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.941018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.941026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:100960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.941032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.941040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:100968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.941047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.941055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:100976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.941062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.941070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:100984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 
06:22:52.941077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.941085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:100992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.941091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.941101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:101000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.941107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.941121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:101008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.941127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.941135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:101016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.941142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.941150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:101024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.941156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.941164] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:87 nsid:1 lba:101032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.941170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.941178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.941185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.941192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:101048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.941199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.941206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:101056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.941212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.941221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:101064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.941227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.424 [2024-12-15 06:22:52.941235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:101072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.424 [2024-12-15 06:22:52.941241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:101080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.425 [2024-12-15 06:22:52.941256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:101112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:101120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:101128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:101136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:101144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:101160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:101168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:101176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:101184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:101192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:101200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:101208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:101216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:101224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:101232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:101240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:101248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:101256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:101272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:101280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:101312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:101320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:101328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:101344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:101352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:101360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:101368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:101376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:101384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.425 [2024-12-15 06:22:52.941761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.425 [2024-12-15 06:22:52.941768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:101392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.426 [2024-12-15 06:22:52.941774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:52.941783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.426 [2024-12-15 06:22:52.941790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:52.941798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:101408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.426 [2024-12-15 06:22:52.941804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:52.941813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:101416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.426 [2024-12-15 06:22:52.941820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:52.941827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:101424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.426 [2024-12-15 06:22:52.941836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:52.941844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:101432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.426 [2024-12-15 06:22:52.941850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:52.941858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:101440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.426 [2024-12-15 06:22:52.941864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:52.941872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:101448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.426 [2024-12-15 06:22:52.941878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:52.941886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:101456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.426 [2024-12-15 06:22:52.941893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:52.941900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:101464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.426 [2024-12-15 06:22:52.941908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:52.941917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:101472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.426 [2024-12-15 06:22:52.941923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:52.941932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:101480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.426 [2024-12-15 06:22:52.941939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:52.941948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:101488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.426 [2024-12-15 06:22:52.941954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:52.941973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:47.426 [2024-12-15 06:22:52.941980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101496 len:8 PRP1 0x0 PRP2 0x0
00:32:47.426 [2024-12-15 06:22:52.941987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:52.942004] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:47.426 [2024-12-15 06:22:52.942010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:47.426 [2024-12-15 06:22:52.942015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101504 len:8 PRP1 0x0 PRP2 0x0
00:32:47.426 [2024-12-15 06:22:52.942021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:52.942028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:47.426 [2024-12-15 06:22:52.942033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:47.426 [2024-12-15 06:22:52.942039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101512 len:8 PRP1 0x0 PRP2 0x0
00:32:47.426 [2024-12-15 06:22:52.942047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:52.942054] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:47.426 [2024-12-15 06:22:52.942063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:47.426 [2024-12-15 06:22:52.942069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101520 len:8 PRP1 0x0 PRP2 0x0
00:32:47.426 [2024-12-15 06:22:52.942077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:52.942085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:47.426 [2024-12-15 06:22:52.942090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:47.426 [2024-12-15 06:22:52.942096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101528 len:8 PRP1 0x0 PRP2 0x0
00:32:47.426 [2024-12-15 06:22:52.942103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:52.942111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:47.426 [2024-12-15 06:22:52.942116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:47.426 [2024-12-15 06:22:52.942122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101536 len:8 PRP1 0x0 PRP2 0x0
00:32:47.426 [2024-12-15 06:22:52.942128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:52.942135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:47.426 [2024-12-15 06:22:52.942140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:47.426 [2024-12-15 06:22:52.942145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101544 len:8 PRP1 0x0 PRP2 0x0
00:32:47.426 [2024-12-15 06:22:52.942151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:52.942158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:47.426 [2024-12-15 06:22:52.942164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:47.426 [2024-12-15 06:22:52.942170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101552 len:8 PRP1 0x0 PRP2 0x0
00:32:47.426 [2024-12-15 06:22:52.942176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:52.942220] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:32:47.426 [2024-12-15 06:22:52.942242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:32:47.426 [2024-12-15 06:22:52.942250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:52.942258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:32:47.426 [2024-12-15 06:22:52.942264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:52.942272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:32:47.426 [2024-12-15 06:22:52.942279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:52.942286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:32:47.426 [2024-12-15 06:22:52.942294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:52.942301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:32:47.426 [2024-12-15 06:22:52.952728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc26460 (9): Bad file descriptor
00:32:47.426 [2024-12-15 06:22:52.955686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:32:47.426 [2024-12-15 06:22:52.984240] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:32:47.426 11068.00 IOPS, 43.23 MiB/s [2024-12-15T05:23:07.566Z] 11209.00 IOPS, 43.79 MiB/s [2024-12-15T05:23:07.566Z] 11243.75 IOPS, 43.92 MiB/s [2024-12-15T05:23:07.566Z]
[2024-12-15 06:22:56.431866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:30176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.426 [2024-12-15 06:22:56.431902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:56.431916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:30184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.426 [2024-12-15 06:22:56.431924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:56.431934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.426 [2024-12-15 06:22:56.431940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:56.431949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.426 [2024-12-15 06:22:56.431955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:56.431964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:30208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.426 [2024-12-15 06:22:56.431971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:56.431979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:30216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.426 [2024-12-15 06:22:56.431985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.426 [2024-12-15 06:22:56.431998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:30224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:30232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:30248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:30280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:30296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:30304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:30312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:30320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:30328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:30344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:30352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:30360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:30368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:30376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:30384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:30392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:30400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:30408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:30440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:30448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:30456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:30464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:30472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.427 [2024-12-15 06:22:56.432476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.427 [2024-12-15 06:22:56.432484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:30480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.428 [2024-12-15 06:22:56.432491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.428 [2024-12-15 06:22:56.432499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.428 [2024-12-15 06:22:56.432505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.428 [2024-12-15 06:22:56.432514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:30496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.428 [2024-12-15 06:22:56.432520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.428 [2024-12-15 06:22:56.432528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:30504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.428 [2024-12-15 06:22:56.432535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.428 [2024-12-15 06:22:56.432543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:30512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.428 [2024-12-15 06:22:56.432550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.428 [2024-12-15 06:22:56.432557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.428 [2024-12-15 06:22:56.432564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.428 [2024-12-15 06:22:56.432572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.428 [2024-12-15 06:22:56.432578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.428 [2024-12-15 06:22:56.432587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:30536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.428 [2024-12-15 06:22:56.432593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.428 [2024-12-15 06:22:56.432602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.428 [2024-12-15 06:22:56.432608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.428 [2024-12-15 06:22:56.432616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:30552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.428 [2024-12-15 06:22:56.432622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.428 [2024-12-15 06:22:56.432632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.428 [2024-12-15 06:22:56.432639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:47.428 [2024-12-15 06:22:56.432647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1
lba:30568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.428 [2024-12-15 06:22:56.432654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.428 [2024-12-15 06:22:56.432661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.428 [2024-12-15 06:22:56.432668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.428 [2024-12-15 06:22:56.432675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:30584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.428 [2024-12-15 06:22:56.432682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.428 [2024-12-15 06:22:56.432690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:30592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.428 [2024-12-15 06:22:56.432697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.428 [2024-12-15 06:22:56.432705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:30600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.428 [2024-12-15 06:22:56.432711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.428 [2024-12-15 06:22:56.432719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:30608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.428 [2024-12-15 06:22:56.432726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.428 
[2024-12-15 06:22:56.432734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:30616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.428 [2024-12-15 06:22:56.432742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.428 [2024-12-15 06:22:56.432749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:30624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.428 [2024-12-15 06:22:56.432756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.428 [2024-12-15 06:22:56.432764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.428 [2024-12-15 06:22:56.432771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.428 [2024-12-15 06:22:56.432779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:30640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.428 [2024-12-15 06:22:56.432786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.428 [2024-12-15 06:22:56.432794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:30648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.428 [2024-12-15 06:22:56.432801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.428 [2024-12-15 06:22:56.432809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:30656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.428 [2024-12-15 06:22:56.432817] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.428 [2024-12-15 06:22:56.432825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:30664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.428 [2024-12-15 06:22:56.432832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.428 [2024-12-15 06:22:56.432841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:30672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.428 [2024-12-15 06:22:56.432848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.428 [2024-12-15 06:22:56.432857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.428 [2024-12-15 06:22:56.432864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.428 [2024-12-15 06:22:56.432872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:30688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.428 [2024-12-15 06:22:56.432879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.428 [2024-12-15 06:22:56.432888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:30696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.428 [2024-12-15 06:22:56.432895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.428 [2024-12-15 06:22:56.432904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 
lba:30704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.428 [2024-12-15 06:22:56.432910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.428 [2024-12-15 06:22:56.432918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:30712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.428 [2024-12-15 06:22:56.432925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.428 [2024-12-15 06:22:56.432933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.428 [2024-12-15 06:22:56.432941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.428 [2024-12-15 06:22:56.432949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:30728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.428 [2024-12-15 06:22:56.432955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.428 [2024-12-15 06:22:56.432963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:30736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.428 [2024-12-15 06:22:56.432970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.428 [2024-12-15 06:22:56.432978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:30744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.428 [2024-12-15 06:22:56.432984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.428 
[2024-12-15 06:22:56.432996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.428 [2024-12-15 06:22:56.433003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.428 [2024-12-15 06:22:56.433013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:30760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.428 [2024-12-15 06:22:56.433020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.428 [2024-12-15 06:22:56.433028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:30768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.428 [2024-12-15 06:22:56.433036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.428 [2024-12-15 06:22:56.433044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.428 [2024-12-15 06:22:56.433051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.428 [2024-12-15 06:22:56.433059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.428 [2024-12-15 06:22:56.433066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.428 [2024-12-15 06:22:56.433074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.429 [2024-12-15 06:22:56.433081] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.429 [2024-12-15 06:22:56.433095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:30808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.429 [2024-12-15 06:22:56.433110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:30816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.429 [2024-12-15 06:22:56.433125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.429 [2024-12-15 06:22:56.433140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:30832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.429 [2024-12-15 06:22:56.433154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 
lba:30840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.429 [2024-12-15 06:22:56.433169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:30848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.429 [2024-12-15 06:22:56.433183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:30872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.429 [2024-12-15 06:22:56.433200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:30880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.429 [2024-12-15 06:22:56.433216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:30888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.429 [2024-12-15 06:22:56.433230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:30896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.429 [2024-12-15 06:22:56.433245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 
[2024-12-15 06:22:56.433253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:30904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.429 [2024-12-15 06:22:56.433259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:30912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.429 [2024-12-15 06:22:56.433274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:30920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.429 [2024-12-15 06:22:56.433288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:30856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.429 [2024-12-15 06:22:56.433303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:30864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.429 [2024-12-15 06:22:56.433319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.429 [2024-12-15 06:22:56.433334] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:30936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.429 [2024-12-15 06:22:56.433349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:30944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.429 [2024-12-15 06:22:56.433369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:30952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.429 [2024-12-15 06:22:56.433383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:30960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.429 [2024-12-15 06:22:56.433399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:30968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.429 [2024-12-15 06:22:56.433414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 
lba:30976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.429 [2024-12-15 06:22:56.433429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:30984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.429 [2024-12-15 06:22:56.433443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:30992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.429 [2024-12-15 06:22:56.433458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:31000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.429 [2024-12-15 06:22:56.433472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:31008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.429 [2024-12-15 06:22:56.433486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:31016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.429 [2024-12-15 06:22:56.433500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 
06:22:56.433509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.429 [2024-12-15 06:22:56.433516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:31032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.429 [2024-12-15 06:22:56.433531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:31040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.429 [2024-12-15 06:22:56.433545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:31048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.429 [2024-12-15 06:22:56.433560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:31056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.429 [2024-12-15 06:22:56.433574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:31064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.429 [2024-12-15 06:22:56.433590] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:31072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.429 [2024-12-15 06:22:56.433605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:31080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.429 [2024-12-15 06:22:56.433619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:31088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.429 [2024-12-15 06:22:56.433634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:31096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.429 [2024-12-15 06:22:56.433648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:31104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.429 [2024-12-15 06:22:56.433662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.429 [2024-12-15 06:22:56.433671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:31112 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:32:47.430 [2024-12-15 06:22:56.433678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.430 [2024-12-15 06:22:56.433686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:31120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.430 [2024-12-15 06:22:56.433692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.430 [2024-12-15 06:22:56.433699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.430 [2024-12-15 06:22:56.433706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.430 [2024-12-15 06:22:56.433713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:31136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.430 [2024-12-15 06:22:56.433720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.430 [2024-12-15 06:22:56.433728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:31144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.430 [2024-12-15 06:22:56.433735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.430 [2024-12-15 06:22:56.433742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:31152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.430 [2024-12-15 06:22:56.433748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.430 [2024-12-15 06:22:56.433756] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:31160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.430 [2024-12-15 06:22:56.433767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.430 [2024-12-15 06:22:56.433776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:31168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.430 [2024-12-15 06:22:56.433783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.430 [2024-12-15 06:22:56.433791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.430 [2024-12-15 06:22:56.433797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.430 [2024-12-15 06:22:56.433816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:47.430 [2024-12-15 06:22:56.433823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31184 len:8 PRP1 0x0 PRP2 0x0 00:32:47.430 [2024-12-15 06:22:56.433830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.430 [2024-12-15 06:22:56.433840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:47.430 [2024-12-15 06:22:56.433846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:47.430 [2024-12-15 06:22:56.433852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31192 len:8 PRP1 0x0 PRP2 0x0 00:32:47.430 [2024-12-15 06:22:56.433858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.430 [2024-12-15 06:22:56.433901] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:32:47.430 [2024-12-15 06:22:56.433922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:47.430 [2024-12-15 06:22:56.433930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.430 [2024-12-15 06:22:56.433937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:47.430 [2024-12-15 06:22:56.433944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.430 [2024-12-15 06:22:56.433951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:47.430 [2024-12-15 06:22:56.433957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.430 [2024-12-15 06:22:56.433964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:47.430 [2024-12-15 06:22:56.433972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.430 [2024-12-15 06:22:56.433979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
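The abort burst above follows a fixed record format, so it can be condensed mechanically. The following is a minimal, hypothetical sketch (not part of SPDK or this test suite) of a parser that summarizes such `nvme_qpair.c` command records by opcode and LBA range; the regex assumes the exact field layout shown in the log.

```python
import re
from collections import Counter

# Matches SPDK's nvme_io_qpair_print_command notice format as it appears above.
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
    r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
)

def summarize_aborts(log_text):
    """Count READ/WRITE commands in the excerpt and report the LBA span."""
    ops = Counter()
    lbas = []
    for m in CMD_RE.finditer(log_text):
        op, _sqid, _cid, _nsid, lba, _length = m.groups()
        ops[op] += 1
        lbas.append(int(lba))
    lba_range = (min(lbas), max(lbas)) if lbas else (None, None)
    return ops, lba_range

# Two records copied verbatim from the log above, used as sample input.
sample = (
    "[2024-12-15 06:22:56.432318] nvme_qpair.c: 243:nvme_io_qpair_print_command: "
    "*NOTICE*: READ sqid:1 cid:32 nsid:1 lba:30392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0\n"
    "[2024-12-15 06:22:56.433192] nvme_qpair.c: 243:nvme_io_qpair_print_command: "
    "*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:30872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000\n"
)
ops, lba_range = summarize_aborts(sample)
print(ops["READ"], ops["WRITE"], lba_range)  # 1 1 (30392, 30872)
```

Running it over the full burst would reproduce the condensed summary lines used above (command counts per opcode plus the lba span).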
00:32:47.430 [2024-12-15 06:22:56.434013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc26460 (9): Bad file descriptor 00:32:47.430 [2024-12-15 06:22:56.436791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:32:47.430 [2024-12-15 06:22:56.461112] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:32:47.430 11185.20 IOPS, 43.69 MiB/s [2024-12-15T05:23:07.570Z] 11194.50 IOPS, 43.73 MiB/s [2024-12-15T05:23:07.570Z] 11259.29 IOPS, 43.98 MiB/s [2024-12-15T05:23:07.570Z] 11254.75 IOPS, 43.96 MiB/s [2024-12-15T05:23:07.570Z] 11266.22 IOPS, 44.01 MiB/s [2024-12-15T05:23:07.570Z] [2024-12-15 06:23:00.857807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:46056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.430 [2024-12-15 06:23:00.857848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.430 [2024-12-15 06:23:00.857862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:46064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.430 [2024-12-15 06:23:00.857870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.430 [2024-12-15 06:23:00.857879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:46072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.430 [2024-12-15 06:23:00.857885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.430 [2024-12-15 06:23:00.857894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:46080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.430 
[2024-12-15 06:23:00.857901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.430
[... repeated READ command / ABORTED - SQ DELETION (00/08) completion pairs, sqid:1, lba:46088 through lba:46344 ...]
[... repeated WRITE command / ABORTED - SQ DELETION (00/08) completion pairs, sqid:1, lba:46352 through lba:46896; final record truncated ...]
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.433 [2024-12-15 06:23:00.859441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:46904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.433 [2024-12-15 06:23:00.859448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.433 [2024-12-15 06:23:00.859456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:46912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.433 [2024-12-15 06:23:00.859463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.433 [2024-12-15 06:23:00.859472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:46920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.433 [2024-12-15 06:23:00.859478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.433 [2024-12-15 06:23:00.859486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:46928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.433 [2024-12-15 06:23:00.859492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.433 [2024-12-15 06:23:00.859501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:46936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.433 [2024-12-15 06:23:00.859507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.433 [2024-12-15 06:23:00.859515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 
lba:46944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.433 [2024-12-15 06:23:00.859522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.433 [2024-12-15 06:23:00.859529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:46952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.433 [2024-12-15 06:23:00.859535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.433 [2024-12-15 06:23:00.859543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:46960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.433 [2024-12-15 06:23:00.859549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.433 [2024-12-15 06:23:00.859559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:46968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.433 [2024-12-15 06:23:00.859567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.433 [2024-12-15 06:23:00.859575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:46976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.433 [2024-12-15 06:23:00.859582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.433 [2024-12-15 06:23:00.859589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:46984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.433 [2024-12-15 06:23:00.859595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.433 [2024-12-15 
06:23:00.859604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:46992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.433 [2024-12-15 06:23:00.859611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.433 [2024-12-15 06:23:00.859618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:47000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.433 [2024-12-15 06:23:00.859624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.433 [2024-12-15 06:23:00.859632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:47008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.433 [2024-12-15 06:23:00.859638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.433 [2024-12-15 06:23:00.859646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:47016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.433 [2024-12-15 06:23:00.859652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.433 [2024-12-15 06:23:00.859660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:47024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.433 [2024-12-15 06:23:00.859667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.433 [2024-12-15 06:23:00.859688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:47.433 [2024-12-15 06:23:00.859695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47032 len:8 
PRP1 0x0 PRP2 0x0 00:32:47.433 [2024-12-15 06:23:00.859701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.433 [2024-12-15 06:23:00.859712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:47.433 [2024-12-15 06:23:00.859718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:47.433 [2024-12-15 06:23:00.859724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47040 len:8 PRP1 0x0 PRP2 0x0 00:32:47.433 [2024-12-15 06:23:00.859730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.433 [2024-12-15 06:23:00.859737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:47.433 [2024-12-15 06:23:00.859741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:47.433 [2024-12-15 06:23:00.859747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47048 len:8 PRP1 0x0 PRP2 0x0 00:32:47.433 [2024-12-15 06:23:00.859755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.433 [2024-12-15 06:23:00.859763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:47.433 [2024-12-15 06:23:00.859768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:47.433 [2024-12-15 06:23:00.859773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47056 len:8 PRP1 0x0 PRP2 0x0 00:32:47.433 [2024-12-15 06:23:00.859779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.433 [2024-12-15 06:23:00.859786] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:47.433 [2024-12-15 06:23:00.859790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:47.433 [2024-12-15 06:23:00.859797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47064 len:8 PRP1 0x0 PRP2 0x0 00:32:47.433 [2024-12-15 06:23:00.859804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.433 [2024-12-15 06:23:00.859810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:47.433 [2024-12-15 06:23:00.859815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:47.433 [2024-12-15 06:23:00.859821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47072 len:8 PRP1 0x0 PRP2 0x0 00:32:47.433 [2024-12-15 06:23:00.859827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.433 [2024-12-15 06:23:00.859868] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:47.433 [2024-12-15 06:23:00.859891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:47.433 [2024-12-15 06:23:00.859898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.433 [2024-12-15 06:23:00.859907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:47.434 [2024-12-15 06:23:00.859913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.434 
[2024-12-15 06:23:00.859920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:47.434 [2024-12-15 06:23:00.859927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.434 [2024-12-15 06:23:00.859933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:47.434 [2024-12-15 06:23:00.859940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:47.434 [2024-12-15 06:23:00.859946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:32:47.434 [2024-12-15 06:23:00.859978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc26460 (9): Bad file descriptor 00:32:47.434 [2024-12-15 06:23:00.862735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:32:47.434 [2024-12-15 06:23:00.965453] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:32:47.434 11156.90 IOPS, 43.58 MiB/s [2024-12-15T05:23:07.574Z] 11176.18 IOPS, 43.66 MiB/s [2024-12-15T05:23:07.574Z] 11185.00 IOPS, 43.69 MiB/s [2024-12-15T05:23:07.574Z] 11199.92 IOPS, 43.75 MiB/s [2024-12-15T05:23:07.574Z] 11209.93 IOPS, 43.79 MiB/s [2024-12-15T05:23:07.574Z] 11233.27 IOPS, 43.88 MiB/s 00:32:47.434 Latency(us) 00:32:47.434 [2024-12-15T05:23:07.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.434 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:47.434 Verification LBA range: start 0x0 length 0x4000 00:32:47.434 NVMe0n1 : 15.01 11225.63 43.85 481.58 0.00 10910.68 433.01 20721.86 00:32:47.434 [2024-12-15T05:23:07.574Z] =================================================================================================================== 00:32:47.434 [2024-12-15T05:23:07.574Z] Total : 11225.63 43.85 481.58 0.00 10910.68 433.01 20721.86 00:32:47.434 Received shutdown signal, test time was about 15.000000 seconds 00:32:47.434 00:32:47.434 Latency(us) 00:32:47.434 [2024-12-15T05:23:07.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.434 [2024-12-15T05:23:07.574Z] =================================================================================================================== 00:32:47.434 [2024-12-15T05:23:07.574Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:47.434 06:23:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:32:47.434 06:23:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:32:47.434 06:23:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:32:47.434 06:23:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1149078 00:32:47.434 06:23:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:32:47.434 06:23:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1149078 /var/tmp/bdevperf.sock 00:32:47.434 06:23:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1149078 ']' 00:32:47.434 06:23:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:47.434 06:23:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:47.434 06:23:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:47.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:47.434 06:23:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:47.434 06:23:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:47.434 06:23:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:47.434 06:23:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:47.434 06:23:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:47.434 [2024-12-15 06:23:07.523460] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:47.434 06:23:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:47.690 [2024-12-15 06:23:07.712020] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:47.690 
06:23:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:47.947 NVMe0n1 00:32:47.947 06:23:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:48.203 00:32:48.203 06:23:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:48.766 00:32:48.766 06:23:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:48.766 06:23:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:48.766 06:23:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:49.022 06:23:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:52.292 06:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:52.292 06:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:52.292 06:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1149789 00:32:52.292 06:23:12 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:52.292 06:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1149789 00:32:53.660 { 00:32:53.660 "results": [ 00:32:53.660 { 00:32:53.660 "job": "NVMe0n1", 00:32:53.660 "core_mask": "0x1", 00:32:53.660 "workload": "verify", 00:32:53.660 "status": "finished", 00:32:53.660 "verify_range": { 00:32:53.660 "start": 0, 00:32:53.660 "length": 16384 00:32:53.660 }, 00:32:53.660 "queue_depth": 128, 00:32:53.660 "io_size": 4096, 00:32:53.660 "runtime": 1.003963, 00:32:53.660 "iops": 11240.454080479061, 00:32:53.660 "mibps": 43.90802375187133, 00:32:53.660 "io_failed": 0, 00:32:53.660 "io_timeout": 0, 00:32:53.660 "avg_latency_us": 11345.025390805326, 00:32:53.660 "min_latency_us": 764.5866666666667, 00:32:53.660 "max_latency_us": 11734.064761904761 00:32:53.660 } 00:32:53.660 ], 00:32:53.660 "core_count": 1 00:32:53.660 } 00:32:53.660 06:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:53.660 [2024-12-15 06:23:07.164418] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:32:53.660 [2024-12-15 06:23:07.164472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1149078 ] 00:32:53.660 [2024-12-15 06:23:07.239997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:53.660 [2024-12-15 06:23:07.260506] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:53.660 [2024-12-15 06:23:09.023794] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:53.660 [2024-12-15 06:23:09.023841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:53.660 [2024-12-15 06:23:09.023853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.661 [2024-12-15 06:23:09.023862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:53.661 [2024-12-15 06:23:09.023868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.661 [2024-12-15 06:23:09.023875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:53.661 [2024-12-15 06:23:09.023881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.661 [2024-12-15 06:23:09.023888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:53.661 [2024-12-15 06:23:09.023894] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.661 [2024-12-15 06:23:09.023901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:32:53.661 [2024-12-15 06:23:09.023925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:32:53.661 [2024-12-15 06:23:09.023939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7b8460 (9): Bad file descriptor 00:32:53.661 [2024-12-15 06:23:09.069013] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:32:53.661 Running I/O for 1 seconds... 00:32:53.661 11157.00 IOPS, 43.58 MiB/s 00:32:53.661 Latency(us) 00:32:53.661 [2024-12-15T05:23:13.801Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:53.661 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:53.661 Verification LBA range: start 0x0 length 0x4000 00:32:53.661 NVMe0n1 : 1.00 11240.45 43.91 0.00 0.00 11345.03 764.59 11734.06 00:32:53.661 [2024-12-15T05:23:13.801Z] =================================================================================================================== 00:32:53.661 [2024-12-15T05:23:13.801Z] Total : 11240.45 43.91 0.00 0.00 11345.03 764.59 11734.06 00:32:53.661 06:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:53.661 06:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:53.661 06:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:53.661 06:23:13 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:53.661 06:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:53.916 06:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:54.172 06:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:57.448 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:57.448 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:57.448 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1149078 00:32:57.448 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1149078 ']' 00:32:57.448 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1149078 00:32:57.448 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:57.448 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:57.448 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1149078 00:32:57.448 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:57.448 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:57.448 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1149078' 00:32:57.448 killing 
process with pid 1149078 00:32:57.448 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1149078 00:32:57.448 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1149078 00:32:57.704 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:57.704 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:57.704 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:57.704 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:57.704 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:57.704 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:57.704 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:32:57.704 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:57.704 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:32:57.704 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:57.704 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:57.704 rmmod nvme_tcp 00:32:57.704 rmmod nvme_fabrics 00:32:57.961 rmmod nvme_keyring 00:32:57.961 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:57.961 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:32:57.961 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:32:57.961 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1146175 ']' 00:32:57.961 06:23:17 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1146175 00:32:57.961 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1146175 ']' 00:32:57.961 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1146175 00:32:57.961 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:57.961 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:57.961 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1146175 00:32:57.961 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:57.961 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:57.961 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1146175' 00:32:57.961 killing process with pid 1146175 00:32:57.961 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1146175 00:32:57.961 06:23:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1146175 00:32:58.219 06:23:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:58.219 06:23:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:58.219 06:23:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:58.219 06:23:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:32:58.219 06:23:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:32:58.219 06:23:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:58.219 06:23:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:32:58.219 06:23:18 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:58.219 06:23:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:58.219 06:23:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:58.219 06:23:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:58.219 06:23:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:00.124 06:23:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:00.124 00:33:00.124 real 0m37.041s 00:33:00.124 user 1m57.217s 00:33:00.124 sys 0m7.880s 00:33:00.124 06:23:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:00.124 06:23:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:00.124 ************************************ 00:33:00.124 END TEST nvmf_failover 00:33:00.124 ************************************ 00:33:00.124 06:23:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:00.124 06:23:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:00.124 06:23:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:00.124 06:23:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.124 ************************************ 00:33:00.124 START TEST nvmf_host_discovery 00:33:00.124 ************************************ 00:33:00.124 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:00.384 * Looking for test storage... 
00:33:00.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:00.384 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:00.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:00.385 --rc genhtml_branch_coverage=1 00:33:00.385 --rc genhtml_function_coverage=1 00:33:00.385 --rc 
genhtml_legend=1 00:33:00.385 --rc geninfo_all_blocks=1 00:33:00.385 --rc geninfo_unexecuted_blocks=1 00:33:00.385 00:33:00.385 ' 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:00.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:00.385 --rc genhtml_branch_coverage=1 00:33:00.385 --rc genhtml_function_coverage=1 00:33:00.385 --rc genhtml_legend=1 00:33:00.385 --rc geninfo_all_blocks=1 00:33:00.385 --rc geninfo_unexecuted_blocks=1 00:33:00.385 00:33:00.385 ' 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:00.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:00.385 --rc genhtml_branch_coverage=1 00:33:00.385 --rc genhtml_function_coverage=1 00:33:00.385 --rc genhtml_legend=1 00:33:00.385 --rc geninfo_all_blocks=1 00:33:00.385 --rc geninfo_unexecuted_blocks=1 00:33:00.385 00:33:00.385 ' 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:00.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:00.385 --rc genhtml_branch_coverage=1 00:33:00.385 --rc genhtml_function_coverage=1 00:33:00.385 --rc genhtml_legend=1 00:33:00.385 --rc geninfo_all_blocks=1 00:33:00.385 --rc geninfo_unexecuted_blocks=1 00:33:00.385 00:33:00.385 ' 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:00.385 06:23:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:00.385 06:23:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:00.385 06:23:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:00.385 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:33:00.385 06:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:33:06.955 
06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:06.955 06:23:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:06.955 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:06.955 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:06.955 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:06.956 Found net devices under 0000:af:00.0: cvl_0_0 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:06.956 Found net devices under 0000:af:00.1: cvl_0_1 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:06.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:06.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:33:06.956 00:33:06.956 --- 10.0.0.2 ping statistics --- 00:33:06.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.956 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:06.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:06.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:33:06.956 00:33:06.956 --- 10.0.0.1 ping statistics --- 00:33:06.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.956 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:06.956 
06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1154157 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1154157 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1154157 ']' 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:06.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.956 [2024-12-15 06:23:26.552655] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:33:06.956 [2024-12-15 06:23:26.552707] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:06.956 [2024-12-15 06:23:26.633476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.956 [2024-12-15 06:23:26.655053] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:06.956 [2024-12-15 06:23:26.655089] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:06.956 [2024-12-15 06:23:26.655097] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:06.956 [2024-12-15 06:23:26.655103] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:06.956 [2024-12-15 06:23:26.655108] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:06.956 [2024-12-15 06:23:26.655587] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.956 [2024-12-15 06:23:26.786834] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.956 [2024-12-15 06:23:26.799005] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:06.956 06:23:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.956 null0 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.956 null1 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.956 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:33:06.957 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.957 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.957 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.957 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1154265 00:33:06.957 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:33:06.957 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1154265 /tmp/host.sock 00:33:06.957 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 1154265 ']' 00:33:06.957 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:33:06.957 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:06.957 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:06.957 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:06.957 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:06.957 06:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.957 [2024-12-15 06:23:26.874804] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:33:06.957 [2024-12-15 06:23:26.874848] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1154265 ] 00:33:06.957 [2024-12-15 06:23:26.947983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.957 [2024-12-15 06:23:26.970888] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:06.957 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:06.957 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:33:06.957 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:06.957 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:33:06.957 
06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.957 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.957 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.957 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:33:06.957 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.957 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.957 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.957 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:33:06.957 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:33:06.957 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:06.957 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:06.957 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.957 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:06.957 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:06.957 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.957 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:33:07.215 06:23:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 
00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:33:07.215 
06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.215 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.473 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:33:07.473 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:33:07.473 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.473 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.473 [2024-12-15 06:23:27.372467] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:07.473 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.473 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:33:07.473 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:07.473 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:07.473 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:07.473 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.473 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.473 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:07.473 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.473 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:33:07.473 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:33:07.473 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:07.473 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:07.473 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.473 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:33:07.473 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:07.473 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:07.473 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.473 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:33:07.473 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:33:07.473 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:07.473 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:07.473 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:07.473 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:07.473 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:07.474 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:07.474 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:07.474 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:07.474 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:07.474 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.474 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.474 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.474 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:07.474 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:33:07.474 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:07.474 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:07.474 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:33:07.474 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.474 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.474 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.474 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:07.474 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:07.474 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:07.474 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:07.474 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:33:07.474 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:07.474 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:07.474 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:07.474 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.474 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:07.474 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.474 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:07.474 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.474 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:33:07.474 06:23:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:33:08.040 [2024-12-15 06:23:28.125144] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:08.040 [2024-12-15 06:23:28.125166] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:08.040 [2024-12-15 06:23:28.125182] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:08.298 [2024-12-15 06:23:28.211431] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:08.298 [2024-12-15 06:23:28.307031] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:33:08.298 [2024-12-15 06:23:28.307656] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x716c60:1 started. 00:33:08.298 [2024-12-15 06:23:28.309036] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:08.298 [2024-12-15 06:23:28.309053] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:08.298 [2024-12-15 06:23:28.353653] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x716c60 was disconnected and freed. delete nvme_qpair. 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:08.556 06:23:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:08.556 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:08.815 [2024-12-15 06:23:28.789806] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x716fe0:1 started. 
00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.815 [2024-12-15 06:23:28.835453] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x716fe0 was disconnected and freed. delete nvme_qpair. 
00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:08.815 [2024-12-15 06:23:28.888539] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:08.815 [2024-12-15 06:23:28.889606] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:08.815 [2024-12-15 06:23:28.889625] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:08.815 06:23:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.815 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.074 [2024-12-15 06:23:28.975871] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:33:09.074 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.074 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:09.074 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:09.074 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:09.074 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:09.074 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:33:09.074 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:09.074 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:09.074 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:09.074 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:09.074 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:09.074 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.074 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:09.074 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.074 06:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:09.074 06:23:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.074 06:23:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:33:09.074 06:23:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:33:09.332 [2024-12-15 06:23:29.286140] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:33:09.332 [2024-12-15 06:23:29.286172] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:09.332 [2024-12-15 06:23:29.286180] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:33:09.332 [2024-12-15 06:23:29.286184] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:09.896 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:09.896 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:09.896 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:09.896 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:09.896 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:09.896 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.896 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:09.896 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.896 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:10.154 [2024-12-15 06:23:30.124432] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:10.154 [2024-12-15 06:23:30.124458] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
'[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:10.154 [2024-12-15 06:23:30.131590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:10.154 [2024-12-15 06:23:30.131609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.154 [2024-12-15 06:23:30.131618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:10.154 [2024-12-15 06:23:30.131626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.154 [2024-12-15 06:23:30.131633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:10.154 [2024-12-15 06:23:30.131640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.154 [2024-12-15 06:23:30.131647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:10.154 [2024-12-15 06:23:30.131654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.154 [2024-12-15 06:23:30.131665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e8d70 is same with the state(6) to be set 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:10.154 06:23:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:10.154 [2024-12-15 06:23:30.141601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e8d70 (9): Bad file descriptor 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.154 [2024-12-15 06:23:30.151636] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:10.154 [2024-12-15 06:23:30.151647] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:10.154 [2024-12-15 06:23:30.151654] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:10.154 [2024-12-15 06:23:30.151658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:10.154 [2024-12-15 06:23:30.151677] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:10.154 [2024-12-15 06:23:30.151921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.154 [2024-12-15 06:23:30.151936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6e8d70 with addr=10.0.0.2, port=4420 00:33:10.154 [2024-12-15 06:23:30.151945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e8d70 is same with the state(6) to be set 00:33:10.154 [2024-12-15 06:23:30.151956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e8d70 (9): Bad file descriptor 00:33:10.154 [2024-12-15 06:23:30.151967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:10.154 [2024-12-15 06:23:30.151973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:10.154 [2024-12-15 06:23:30.151982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:10.154 [2024-12-15 06:23:30.151988] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:10.154 [2024-12-15 06:23:30.151998] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:10.154 [2024-12-15 06:23:30.152003] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:10.154 [2024-12-15 06:23:30.161708] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:10.154 [2024-12-15 06:23:30.161719] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:33:10.154 [2024-12-15 06:23:30.161724] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:10.154 [2024-12-15 06:23:30.161727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:10.154 [2024-12-15 06:23:30.161741] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:10.154 [2024-12-15 06:23:30.161967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.154 [2024-12-15 06:23:30.161983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6e8d70 with addr=10.0.0.2, port=4420 00:33:10.154 [2024-12-15 06:23:30.161995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e8d70 is same with the state(6) to be set 00:33:10.154 [2024-12-15 06:23:30.162007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e8d70 (9): Bad file descriptor 00:33:10.154 [2024-12-15 06:23:30.162016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:10.154 [2024-12-15 06:23:30.162023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:10.154 [2024-12-15 06:23:30.162029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:10.154 [2024-12-15 06:23:30.162035] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:10.154 [2024-12-15 06:23:30.162039] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:10.154 [2024-12-15 06:23:30.162043] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:33:10.154 [2024-12-15 06:23:30.171772] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:10.154 [2024-12-15 06:23:30.171785] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:10.154 [2024-12-15 06:23:30.171789] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:10.154 [2024-12-15 06:23:30.171793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:10.154 [2024-12-15 06:23:30.171807] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:10.154 [2024-12-15 06:23:30.171923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.154 [2024-12-15 06:23:30.171936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6e8d70 with addr=10.0.0.2, port=4420 00:33:10.154 [2024-12-15 06:23:30.171943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e8d70 is same with the state(6) to be set 00:33:10.154 [2024-12-15 06:23:30.171954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e8d70 (9): Bad file descriptor 00:33:10.154 [2024-12-15 06:23:30.171963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:10.154 [2024-12-15 06:23:30.171969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:10.154 [2024-12-15 06:23:30.171976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:10.154 [2024-12-15 06:23:30.171981] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:33:10.154 [2024-12-15 06:23:30.171986] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:10.154 [2024-12-15 06:23:30.171990] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:10.154 [2024-12-15 06:23:30.181837] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:10.154 [2024-12-15 06:23:30.181850] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:10.154 [2024-12-15 06:23:30.181855] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:33:10.154 [2024-12-15 06:23:30.181859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:10.154 [2024-12-15 06:23:30.181871] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:10.154 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:10.154 [2024-12-15 06:23:30.182099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.155 [2024-12-15 06:23:30.182113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6e8d70 with addr=10.0.0.2, port=4420 00:33:10.155 [2024-12-15 06:23:30.182121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e8d70 is same with the state(6) to be set 00:33:10.155 [2024-12-15 06:23:30.182131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e8d70 (9): Bad file descriptor 00:33:10.260 [2024-12-15 06:23:30.182140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:10.260 [2024-12-15 06:23:30.182147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:10.260 [2024-12-15 06:23:30.182154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:10.260 [2024-12-15 06:23:30.182160] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:10.260 [2024-12-15 06:23:30.182164] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:10.260 [2024-12-15 06:23:30.182168] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:33:10.260 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:10.260 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:10.260 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.260 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:10.260 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:10.260 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:10.260 [2024-12-15 06:23:30.191903] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:10.260 [2024-12-15 06:23:30.191917] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:10.260 [2024-12-15 06:23:30.191921] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:10.260 [2024-12-15 06:23:30.191926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:10.260 [2024-12-15 06:23:30.191940] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:10.260 [2024-12-15 06:23:30.192021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.260 [2024-12-15 06:23:30.192035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6e8d70 with addr=10.0.0.2, port=4420 00:33:10.260 [2024-12-15 06:23:30.192042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e8d70 is same with the state(6) to be set 00:33:10.260 [2024-12-15 06:23:30.192057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e8d70 (9): Bad file descriptor 00:33:10.260 [2024-12-15 06:23:30.192066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:10.260 [2024-12-15 06:23:30.192072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:10.260 [2024-12-15 06:23:30.192079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:10.260 [2024-12-15 06:23:30.192084] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:10.260 [2024-12-15 06:23:30.192089] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:10.260 [2024-12-15 06:23:30.192093] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:10.260 [2024-12-15 06:23:30.201971] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:10.260 [2024-12-15 06:23:30.201981] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:33:10.260 [2024-12-15 06:23:30.201985] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:10.260 [2024-12-15 06:23:30.201989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:10.260 [2024-12-15 06:23:30.202006] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:10.260 [2024-12-15 06:23:30.202223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.261 [2024-12-15 06:23:30.202235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6e8d70 with addr=10.0.0.2, port=4420 00:33:10.261 [2024-12-15 06:23:30.202243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e8d70 is same with the state(6) to be set 00:33:10.261 [2024-12-15 06:23:30.202253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e8d70 (9): Bad file descriptor 00:33:10.261 [2024-12-15 06:23:30.202262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:10.261 [2024-12-15 06:23:30.202268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:10.261 [2024-12-15 06:23:30.202275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:10.261 [2024-12-15 06:23:30.202280] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:10.261 [2024-12-15 06:23:30.202285] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:10.261 [2024-12-15 06:23:30.202288] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:33:10.261 [2024-12-15 06:23:30.212037] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:10.261 [2024-12-15 06:23:30.212048] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:10.261 [2024-12-15 06:23:30.212053] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:10.261 [2024-12-15 06:23:30.212057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:10.261 [2024-12-15 06:23:30.212071] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:10.261 [2024-12-15 06:23:30.212312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.261 [2024-12-15 06:23:30.212325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6e8d70 with addr=10.0.0.2, port=4420 00:33:10.261 [2024-12-15 06:23:30.212335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e8d70 is same with the state(6) to be set 00:33:10.261 [2024-12-15 06:23:30.212345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e8d70 (9): Bad file descriptor 00:33:10.261 [2024-12-15 06:23:30.212355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:10.261 [2024-12-15 06:23:30.212361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:10.261 [2024-12-15 06:23:30.212367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:10.261 [2024-12-15 06:23:30.212373] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:33:10.261 [2024-12-15 06:23:30.212377] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:10.261 [2024-12-15 06:23:30.212381] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:10.261 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.261 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:10.261 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:10.261 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:10.261 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:10.261 [2024-12-15 06:23:30.222102] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:10.261 [2024-12-15 06:23:30.222115] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:10.261 [2024-12-15 06:23:30.222119] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:10.261 [2024-12-15 06:23:30.222123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:10.261 [2024-12-15 06:23:30.222136] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:10.261 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:10.261 [2024-12-15 06:23:30.222294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.261 [2024-12-15 06:23:30.222309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6e8d70 with addr=10.0.0.2, port=4420 00:33:10.261 [2024-12-15 06:23:30.222316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e8d70 is same with the state(6) to be set 00:33:10.261 [2024-12-15 06:23:30.222326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e8d70 (9): Bad file descriptor 00:33:10.261 [2024-12-15 06:23:30.222335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:10.261 [2024-12-15 06:23:30.222341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:10.261 [2024-12-15 06:23:30.222348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:10.261 [2024-12-15 06:23:30.222353] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:10.261 [2024-12-15 06:23:30.222357] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:10.261 [2024-12-15 06:23:30.222361] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:33:10.261 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:10.261 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:10.261 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:10.261 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:10.261 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:10.261 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.261 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:10.261 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:10.261 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:10.261 [2024-12-15 06:23:30.232167] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:10.261 [2024-12-15 06:23:30.232180] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:10.261 [2024-12-15 06:23:30.232184] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:10.261 [2024-12-15 06:23:30.232188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:10.261 [2024-12-15 06:23:30.232202] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:10.261 [2024-12-15 06:23:30.232401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.261 [2024-12-15 06:23:30.232413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6e8d70 with addr=10.0.0.2, port=4420 00:33:10.261 [2024-12-15 06:23:30.232421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e8d70 is same with the state(6) to be set 00:33:10.261 [2024-12-15 06:23:30.232431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e8d70 (9): Bad file descriptor 00:33:10.261 [2024-12-15 06:23:30.232441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:10.261 [2024-12-15 06:23:30.232447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:10.261 [2024-12-15 06:23:30.232453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:10.261 [2024-12-15 06:23:30.232459] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:10.261 [2024-12-15 06:23:30.232463] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:10.261 [2024-12-15 06:23:30.232467] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:10.261 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.261 [2024-12-15 06:23:30.242233] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:10.261 [2024-12-15 06:23:30.242244] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:33:10.261 [2024-12-15 06:23:30.242248] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:10.261 [2024-12-15 06:23:30.242252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:10.261 [2024-12-15 06:23:30.242264] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:10.261 [2024-12-15 06:23:30.242404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.261 [2024-12-15 06:23:30.242416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6e8d70 with addr=10.0.0.2, port=4420 00:33:10.261 [2024-12-15 06:23:30.242423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e8d70 is same with the state(6) to be set 00:33:10.261 [2024-12-15 06:23:30.242436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e8d70 (9): Bad file descriptor 00:33:10.261 [2024-12-15 06:23:30.242445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:10.261 [2024-12-15 06:23:30.242451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:10.261 [2024-12-15 06:23:30.242457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:10.261 [2024-12-15 06:23:30.242463] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:10.261 [2024-12-15 06:23:30.242467] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:10.261 [2024-12-15 06:23:30.242471] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:33:10.261 [2024-12-15 06:23:30.250804] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:10.261 [2024-12-15 06:23:30.250822] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:10.261 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:33:10.261 06:23:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:33:11.195 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:11.195 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:11.195 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:11.195 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:11.195 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:11.195 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.195 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:11.195 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.195 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:11.195 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.195 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:33:11.195 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@922 -- # return 0 00:33:11.195 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:33:11.195 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:11.195 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:11.195 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:11.195 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:11.195 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:11.195 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:11.195 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:11.195 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:11.195 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:11.195 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.195 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:33:11.454 06:23:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:11.454 
06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:11.454 06:23:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:33:11.454 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:33:11.455 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:11.455 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:11.455 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:11.455 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.455 06:23:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.828 [2024-12-15 06:23:32.545760] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:12.828 [2024-12-15 06:23:32.545777] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:12.828 [2024-12-15 06:23:32.545788] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:12.828 [2024-12-15 06:23:32.633045] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:33:12.828 [2024-12-15 06:23:32.691483] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:33:12.828 [2024-12-15 06:23:32.692004] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x722d20:1 started. 00:33:12.828 [2024-12-15 06:23:32.693577] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:12.828 [2024-12-15 06:23:32.693601] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.828 [2024-12-15 06:23:32.694749] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x722d20 was disconnected and freed. delete nvme_qpair. 
00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.828 request: 00:33:12.828 { 00:33:12.828 "name": "nvme", 00:33:12.828 "trtype": "tcp", 00:33:12.828 "traddr": "10.0.0.2", 00:33:12.828 "adrfam": "ipv4", 00:33:12.828 "trsvcid": "8009", 00:33:12.828 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:12.828 "wait_for_attach": true, 00:33:12.828 "method": "bdev_nvme_start_discovery", 00:33:12.828 "req_id": 1 00:33:12.828 } 00:33:12.828 Got JSON-RPC error response 00:33:12.828 response: 00:33:12.828 { 00:33:12.828 "code": -17, 00:33:12.828 
"message": "File exists" 00:33:12.828 } 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@55 -- # sort 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.828 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.828 request: 00:33:12.828 { 00:33:12.828 "name": "nvme_second", 00:33:12.828 "trtype": "tcp", 00:33:12.828 "traddr": "10.0.0.2", 00:33:12.828 "adrfam": "ipv4", 00:33:12.828 "trsvcid": "8009", 00:33:12.829 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:12.829 "wait_for_attach": true, 00:33:12.829 "method": "bdev_nvme_start_discovery", 00:33:12.829 "req_id": 1 00:33:12.829 } 00:33:12.829 Got JSON-RPC error response 00:33:12.829 response: 00:33:12.829 { 00:33:12.829 "code": -17, 00:33:12.829 "message": "File exists" 00:33:12.829 } 00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # xargs 00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:12.829 
06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:12.829 06:23:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:14.203 [2024-12-15 06:23:33.932868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.203 [2024-12-15 06:23:33.932894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x720680 with addr=10.0.0.2, port=8010
00:33:14.203 [2024-12-15 06:23:33.932907] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:33:14.203 [2024-12-15 06:23:33.932913] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:33:14.203 [2024-12-15 06:23:33.932920] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:33:15.136 [2024-12-15 06:23:34.935365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:15.136 [2024-12-15 06:23:34.935390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x720680 with addr=10.0.0.2, port=8010
00:33:15.136 [2024-12-15 06:23:34.935401] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:33:15.136 [2024-12-15 06:23:34.935408] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:33:15.136 [2024-12-15 06:23:34.935414] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:33:16.068 [2024-12-15 06:23:35.937610] bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr
00:33:16.068 request:
00:33:16.068 {
00:33:16.068 "name": "nvme_second",
00:33:16.068 "trtype": "tcp",
00:33:16.068 "traddr": "10.0.0.2",
00:33:16.068 "adrfam": "ipv4",
00:33:16.068 "trsvcid": "8010",
00:33:16.068 "hostnqn": "nqn.2021-12.io.spdk:test",
00:33:16.068 "wait_for_attach": false,
00:33:16.068 "attach_timeout_ms": 3000,
00:33:16.068 "method": "bdev_nvme_start_discovery",
00:33:16.068 "req_id": 1
00:33:16.068 }
00:33:16.068 Got JSON-RPC error response
00:33:16.068 response:
00:33:16.068 {
00:33:16.068 "code": -110,
00:33:16.068 "message": "Connection timed out"
00:33:16.068 }
00:33:16.068 06:23:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:33:16.068 06:23:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1
00:33:16.068 06:23:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:33:16.068 06:23:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:33:16.068 06:23:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:33:16.068 06:23:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs
00:33:16.068 06:23:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:33:16.068 06:23:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:33:16.068 06:23:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:16.068 06:23:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:33:16.068 06:23:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:16.068 06:23:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:16.068 06:23:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.068 06:23:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:33:16.068 06:23:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:33:16.068 06:23:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1154265 00:33:16.068 06:23:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:33:16.068 06:23:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:16.068 06:23:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:33:16.068 06:23:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:16.068 06:23:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:33:16.068 06:23:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:16.068 06:23:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:16.068 rmmod nvme_tcp 00:33:16.068 rmmod nvme_fabrics 00:33:16.068 rmmod nvme_keyring 00:33:16.068 06:23:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:16.068 06:23:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:33:16.068 06:23:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:33:16.068 06:23:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1154157 ']' 00:33:16.068 06:23:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1154157 00:33:16.068 
06:23:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1154157 ']' 00:33:16.068 06:23:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1154157 00:33:16.068 06:23:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:33:16.068 06:23:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:16.068 06:23:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1154157 00:33:16.068 06:23:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:16.068 06:23:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:16.068 06:23:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1154157' 00:33:16.068 killing process with pid 1154157 00:33:16.068 06:23:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1154157 00:33:16.068 06:23:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1154157 00:33:16.327 06:23:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:16.327 06:23:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:16.327 06:23:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:16.327 06:23:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:33:16.327 06:23:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:33:16.327 06:23:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:16.327 06:23:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:33:16.327 06:23:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:33:16.327 06:23:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns
00:33:16.327 06:23:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:16.327 06:23:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:16.327 06:23:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:18.232 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:33:18.232
00:33:18.232 real 0m18.070s
00:33:18.232 user 0m22.104s
00:33:18.232 sys 0m5.845s
00:33:18.232 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:18.232 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:18.232 ************************************
00:33:18.232 END TEST nvmf_host_discovery
00:33:18.232 ************************************
00:33:18.232 06:23:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:33:18.232 06:23:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:33:18.232 06:23:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:18.232 06:23:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:33:18.491 ************************************
00:33:18.491 START TEST nvmf_host_multipath_status
00:33:18.491 ************************************
00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh
--transport=tcp 00:33:18.491 * Looking for test storage... 00:33:18.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 
00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:18.491 
06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:18.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.491 --rc genhtml_branch_coverage=1 00:33:18.491 --rc genhtml_function_coverage=1 00:33:18.491 --rc genhtml_legend=1 00:33:18.491 --rc geninfo_all_blocks=1 00:33:18.491 --rc geninfo_unexecuted_blocks=1 00:33:18.491 00:33:18.491 ' 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:18.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.491 --rc genhtml_branch_coverage=1 00:33:18.491 --rc genhtml_function_coverage=1 00:33:18.491 --rc genhtml_legend=1 00:33:18.491 --rc geninfo_all_blocks=1 00:33:18.491 --rc geninfo_unexecuted_blocks=1 00:33:18.491 00:33:18.491 ' 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:18.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.491 --rc genhtml_branch_coverage=1 00:33:18.491 --rc genhtml_function_coverage=1 00:33:18.491 --rc genhtml_legend=1 00:33:18.491 --rc geninfo_all_blocks=1 00:33:18.491 --rc geninfo_unexecuted_blocks=1 00:33:18.491 00:33:18.491 ' 00:33:18.491 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:18.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.491 --rc genhtml_branch_coverage=1 00:33:18.491 --rc genhtml_function_coverage=1 00:33:18.491 --rc genhtml_legend=1 00:33:18.491 --rc geninfo_all_blocks=1 00:33:18.491 --rc geninfo_unexecuted_blocks=1 00:33:18.492 00:33:18.492 ' 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 
00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:18.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:18.492 06:23:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:33:18.492 06:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:25.062 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:25.062 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:25.062 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:25.063 Found net devices under 0000:af:00.0: cvl_0_0 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:25.063 06:23:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:25.063 Found net devices under 0000:af:00.1: cvl_0_1 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:25.063 06:23:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:25.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:25.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:33:25.063 00:33:25.063 --- 10.0.0.2 ping statistics --- 00:33:25.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.063 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:25.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:25.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:33:25.063 00:33:25.063 --- 10.0.0.1 ping statistics --- 00:33:25.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.063 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1159378 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
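The `nvmf_tcp_init` sequence above splits one host into target and initiator by moving one NIC into a network namespace, then verifies connectivity in both directions with `ping`. A minimal dry-run sketch of that pattern follows; the interface names (`cvl_0_0`, `cvl_0_1`) and `10.0.0.x` addresses are the ones from this log, and commands are echoed rather than executed since the real ones need root:

```shell
#!/bin/sh
# Dry-run sketch of the netns split performed by nvmf_tcp_init.
# run() prints each command instead of executing it (the real ones need root).
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                           # namespace for the target side
run ip link set cvl_0_0 netns "$NS"              # move the target NIC into it
run ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator address (host side)
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                           # host -> target sanity check
```

The target application is then launched under `ip netns exec cvl_0_0_ns_spdk`, which is why the log prepends that prefix to the `nvmf_tgt` command line.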
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1159378 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1159378 ']' 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:25.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:25.063 [2024-12-15 06:23:44.575950] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:33:25.063 [2024-12-15 06:23:44.576005] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:25.063 [2024-12-15 06:23:44.637008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:25.063 [2024-12-15 06:23:44.659523] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:25.063 [2024-12-15 06:23:44.659561] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:25.063 [2024-12-15 06:23:44.659569] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:25.063 [2024-12-15 06:23:44.659575] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:25.063 [2024-12-15 06:23:44.659580] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:25.063 [2024-12-15 06:23:44.662015] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:25.063 [2024-12-15 06:23:44.662018] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:25.063 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:25.064 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1159378 00:33:25.064 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:25.064 [2024-12-15 06:23:44.970625] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:25.064 06:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:33:25.064 Malloc0 00:33:25.322 06:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:33:25.322 06:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:25.579 06:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:25.838 [2024-12-15 06:23:45.766405] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:25.838 06:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:25.838 [2024-12-15 06:23:45.954882] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:26.097 06:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:33:26.097 06:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1159624 00:33:26.097 06:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:26.097 06:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1159624 /var/tmp/bdevperf.sock 00:33:26.097 06:23:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1159624 ']' 00:33:26.097 06:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:26.097 06:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:26.097 06:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:26.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:26.097 06:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:26.097 06:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:26.097 06:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:26.097 06:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:33:26.097 06:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:26.355 06:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:26.923 Nvme0n1 00:33:26.923 06:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:27.181 Nvme0n1 00:33:27.181 06:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:33:27.181 06:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:29.084 06:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:33:29.084 06:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:29.343 06:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:29.601 06:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:33:30.538 06:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:33:30.538 06:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:30.538 06:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:30.538 06:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:30.797 06:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
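Stripped of timestamps, the bring-up above is a short RPC sequence: create a 64 MiB malloc bdev, expose it through one subsystem with TCP listeners on ports 4420 and 4421, then attach each listener as a separate controller on the same `-b Nvme0` name so bdevperf sees two multipath paths. A dry-run sketch (echoed, since it assumes a running `nvmf_tgt` and `bdevperf` with the socket paths from this log):

```shell
#!/bin/sh
RPC=scripts/rpc.py              # path inside an SPDK checkout (assumption)
run() { echo "+ $*"; }          # dry-run: print instead of execute

run $RPC bdev_malloc_create 64 512 -b Malloc0
run $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
run $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421; do
  run $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  # one bdev_nvme controller per listener gives bdevperf two paths to Nvme0n1
  run $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -x multipath -l -1 -o 10
done
```

Reusing the controller name with `-x multipath` is what makes the second attach register as an additional path rather than a new device, matching the two `Nvme0n1` lines in the log.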
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:30.797 06:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:30.797 06:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:30.797 06:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:31.056 06:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:31.056 06:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:31.056 06:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:31.056 06:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:31.315 06:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:31.315 06:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:31.315 06:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:31.315 06:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:31.315 06:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:31.315 06:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:31.315 06:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:31.315 06:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:31.574 06:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:31.574 06:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:31.574 06:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:31.574 06:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:31.833 06:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:31.833 06:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:33:31.833 06:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:32.092 06:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
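Each `check_status` above is six `port_status` calls: fetch `bdev_nvme_get_io_paths` over the bdevperf RPC socket, pull one attribute (`current`, `connected`, or `accessible`) for one port with `jq`, and compare it to the expected value. A self-contained sketch of that filter, using a hypothetical JSON sample shaped like the fields the log queries:

```shell
#!/bin/sh
# Hypothetical sample of bdev_nvme_get_io_paths output (only the fields
# port_status reads); the real data comes from the RPC socket.
json='{"poll_groups":[{"io_paths":[
  {"transport":{"trsvcid":"4420"},"current":true,"connected":true,"accessible":true},
  {"transport":{"trsvcid":"4421"},"current":false,"connected":true,"accessible":true}]}]}'

# port_status PORT ATTR EXPECTED -> succeeds iff jq reports EXPECTED
port_status() {
  port=$1 attr=$2 expect=$3
  got=$(printf '%s' "$json" |
    jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
  [ "$got" = "$expect" ]
}

# Guarded demo (skipped if jq is not installed on this machine)
if command -v jq >/dev/null 2>&1; then
  port_status 4420 current true && port_status 4421 current false && echo paths-ok
fi
```

The `[[ true == \t\r\u\e ]]` lines in the log are bash's xtrace rendering of exactly this comparison between the jq output and the expected literal.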
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:32.351 06:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:33:33.287 06:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:33.287 06:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:33.287 06:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.287 06:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:33.546 06:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:33.546 06:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:33.546 06:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.546 06:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:33.805 06:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.805 06:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:33.805 06:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.805 06:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:33.805 06:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.805 06:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:33.805 06:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.805 06:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:34.130 06:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:34.130 06:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:34.130 06:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:34.130 06:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:34.451 06:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:34.451 06:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:34.451 06:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:34.451 06:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:34.451 06:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:34.451 06:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:34.451 06:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:34.709 06:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:34.967 06:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:35.903 06:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:35.903 06:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:35.903 06:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:35.903 06:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:36.162 06:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.162 06:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:36.162 06:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.162 06:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:36.421 06:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:36.421 06:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:36.421 06:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.421 06:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:36.680 06:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.680 06:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:36.680 06:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:36.680 06:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.680 06:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.680 06:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:36.939 06:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.939 06:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:36.939 06:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.939 06:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:36.939 06:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.939 06:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:37.198 06:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:37.199 06:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:37.199 06:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:37.458 06:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:37.717 06:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:38.653 06:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:38.653 06:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:38.653 06:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.653 06:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:38.912 06:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:38.912 06:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:38.912 06:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.912 06:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:39.171 06:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:39.171 06:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:39.171 06:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.171 06:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:39.171 06:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:39.171 06:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:39.171 06:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.171 06:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:39.430 06:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:39.430 06:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:39.430 06:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.430 06:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:39.689 06:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:39.689 06:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:39.689 06:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.689 06:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:39.948 06:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:39.948 06:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:39.948 06:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:40.207 06:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:40.207 06:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:41.586 06:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:41.587 06:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:41.587 06:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.587 06:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:41.587 06:24:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:41.587 06:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:41.587 06:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.587 06:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:41.587 06:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:41.587 06:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:41.587 06:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.587 06:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:41.845 06:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:41.845 06:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:41.845 06:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.845 06:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:42.104 
06:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:42.104 06:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:42.104 06:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:42.104 06:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:42.363 06:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:42.363 06:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:42.363 06:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:42.363 06:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:42.363 06:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:42.363 06:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:42.363 06:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:42.623 06:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:42.882 06:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:43.819 06:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:43.819 06:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:43.819 06:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.819 06:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:44.078 06:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:44.078 06:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:44.078 06:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:44.078 06:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:44.337 06:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:44.337 06:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:44.337 06:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:44.337 06:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:44.597 06:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:44.597 06:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:44.597 06:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:44.597 06:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:44.597 06:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:44.597 06:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:44.597 06:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:44.597 06:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:44.856 06:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:44.856 06:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:44.856 06:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:44.856 06:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:45.115 06:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:45.115 06:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:45.373 06:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:33:45.373 06:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:45.631 06:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:45.889 06:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:46.825 06:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:46.825 06:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:46.825 06:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:33:46.825 06:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:47.084 06:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:47.084 06:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:47.084 06:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:47.084 06:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:47.084 06:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:47.084 06:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:47.084 06:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:47.084 06:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:47.343 06:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:47.343 06:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:47.343 06:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:33:47.343 06:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:47.602 06:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:47.602 06:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:47.602 06:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:47.602 06:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:47.861 06:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:47.861 06:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:47.861 06:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:47.861 06:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.119 06:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:48.119 06:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:48.119 06:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:48.378 06:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:48.378 06:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:49.755 06:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:49.755 06:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:49.755 06:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:49.755 06:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:49.755 06:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:49.755 06:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:49.755 06:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:49.755 06:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:50.014 06:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:50.014 06:24:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:50.014 06:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:50.014 06:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:50.273 06:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:50.273 06:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:50.273 06:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:50.273 06:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:50.273 06:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:50.273 06:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:50.273 06:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:50.273 06:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:50.531 06:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:50.531 
06:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:50.531 06:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:50.531 06:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:50.789 06:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:50.789 06:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:50.789 06:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:51.047 06:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:51.305 06:24:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:33:52.242 06:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:52.242 06:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:52.242 06:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:52.242 06:24:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:52.501 06:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:52.501 06:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:52.501 06:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:52.501 06:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:52.760 06:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:52.760 06:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:52.760 06:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:52.760 06:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:52.760 06:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:52.760 06:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:52.760 06:24:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:52.760 06:24:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:53.019 06:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:53.019 06:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:53.019 06:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:53.019 06:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:53.277 06:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:53.277 06:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:53.277 06:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:53.277 06:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:53.535 06:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:53.535 06:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:53.535 06:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:53.794 06:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:53.794 06:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:55.182 06:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:55.182 06:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:55.182 06:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:55.182 06:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:55.182 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:55.182 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:55.182 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:55.182 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:55.441 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:55.441 06:24:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:55.441 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:55.441 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:55.441 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:55.441 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:55.441 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:55.441 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:55.699 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:55.699 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:55.699 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:55.699 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:55.958 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:55.958 
06:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:55.958 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:55.958 06:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:56.217 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:56.217 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1159624 00:33:56.217 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1159624 ']' 00:33:56.217 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1159624 00:33:56.217 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:56.217 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:56.217 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1159624 00:33:56.217 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:33:56.217 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:33:56.217 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1159624' 00:33:56.217 killing process with pid 1159624 00:33:56.217 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1159624 00:33:56.217 
06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1159624 00:33:56.217 { 00:33:56.217 "results": [ 00:33:56.217 { 00:33:56.217 "job": "Nvme0n1", 00:33:56.217 "core_mask": "0x4", 00:33:56.217 "workload": "verify", 00:33:56.217 "status": "terminated", 00:33:56.217 "verify_range": { 00:33:56.217 "start": 0, 00:33:56.217 "length": 16384 00:33:56.217 }, 00:33:56.217 "queue_depth": 128, 00:33:56.217 "io_size": 4096, 00:33:56.217 "runtime": 28.916969, 00:33:56.217 "iops": 10718.931157688068, 00:33:56.217 "mibps": 41.870824834719016, 00:33:56.217 "io_failed": 0, 00:33:56.217 "io_timeout": 0, 00:33:56.217 "avg_latency_us": 11922.188256234811, 00:33:56.217 "min_latency_us": 1185.8895238095238, 00:33:56.217 "max_latency_us": 3019898.88 00:33:56.217 } 00:33:56.217 ], 00:33:56.217 "core_count": 1 00:33:56.217 } 00:33:56.491 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1159624 00:33:56.491 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:56.491 [2024-12-15 06:23:46.013365] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:33:56.491 [2024-12-15 06:23:46.013417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1159624 ] 00:33:56.491 [2024-12-15 06:23:46.086267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:56.491 [2024-12-15 06:23:46.108456] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:33:56.491 Running I/O for 90 seconds... 
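The terminated bdevperf job above prints a JSON summary (the `"results"` block interleaved with the log timestamps). The sketch below is a minimal illustration of consuming that summary once the timestamps are stripped; the JSON values are copied from this run, and the field layout is assumed from this output only, not from SPDK documentation.

```python
import json

# bdevperf's terminal summary, as printed in the log above
# (log timestamps stripped; shape assumed from this single run).
results_text = """
{
  "results": [
    {
      "job": "Nvme0n1",
      "core_mask": "0x4",
      "workload": "verify",
      "status": "terminated",
      "verify_range": {"start": 0, "length": 16384},
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 28.916969,
      "iops": 10718.931157688068,
      "mibps": 41.870824834719016,
      "io_failed": 0,
      "io_timeout": 0,
      "avg_latency_us": 11922.188256234811,
      "min_latency_us": 1185.8895238095238,
      "max_latency_us": 3019898.88
    }
  ],
  "core_count": 1
}
"""

summary = json.loads(results_text)
job = summary["results"][0]

# The reported MiB/s is IOPS times io_size, scaled to mebibytes,
# which is a quick consistency check on the numbers above.
derived_mibps = job["iops"] * job["io_size"] / (1024 * 1024)
print(job["job"], job["status"], round(derived_mibps, 2))  # Nvme0n1 terminated 41.87
```

The derived throughput matches the reported `"mibps"` field, confirming the summary's internal consistency for this run.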
00:33:56.491 11657.00 IOPS, 45.54 MiB/s [2024-12-15T05:24:16.631Z] 11597.00 IOPS, 45.30 MiB/s [2024-12-15T05:24:16.631Z] 11558.33 IOPS, 45.15 MiB/s [2024-12-15T05:24:16.631Z] 11632.25 IOPS, 45.44 MiB/s [2024-12-15T05:24:16.631Z] 11655.40 IOPS, 45.53 MiB/s [2024-12-15T05:24:16.631Z] 11629.00 IOPS, 45.43 MiB/s [2024-12-15T05:24:16.631Z] 11588.71 IOPS, 45.27 MiB/s [2024-12-15T05:24:16.631Z] 11581.62 IOPS, 45.24 MiB/s [2024-12-15T05:24:16.631Z] 11572.33 IOPS, 45.20 MiB/s [2024-12-15T05:24:16.631Z] 11586.70 IOPS, 45.26 MiB/s [2024-12-15T05:24:16.631Z] 11588.55 IOPS, 45.27 MiB/s [2024-12-15T05:24:16.631Z] 11579.83 IOPS, 45.23 MiB/s [2024-12-15T05:24:16.631Z] [2024-12-15 06:24:00.082452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.491 [2024-12-15 06:24:00.082494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:56.491 [2024-12-15 06:24:00.082530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.491 [2024-12-15 06:24:00.082539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:56.491 [2024-12-15 06:24:00.082553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.491 [2024-12-15 06:24:00.082560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:56.491 [2024-12-15 06:24:00.082573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.491 [2024-12-15 06:24:00.082580] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:56.491 [2024-12-15 06:24:00.082593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.491 [2024-12-15 06:24:00.082600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:56.491 [2024-12-15 06:24:00.082612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.491 [2024-12-15 06:24:00.082619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:56.491 [2024-12-15 06:24:00.082632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.491 [2024-12-15 06:24:00.082639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:56.491 [2024-12-15 06:24:00.082651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.491 [2024-12-15 06:24:00.082659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:56.491 [2024-12-15 06:24:00.082672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.491 [2024-12-15 06:24:00.082678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:56.491 [2024-12-15 06:24:00.082691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:85 nsid:1 lba:7856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.491 [2024-12-15 06:24:00.082704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:56.491 [2024-12-15 06:24:00.082717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.491 [2024-12-15 06:24:00.082723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:56.491 [2024-12-15 06:24:00.082736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.491 [2024-12-15 06:24:00.082743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:56.491 [2024-12-15 06:24:00.082755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.491 [2024-12-15 06:24:00.082762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:56.491 [2024-12-15 06:24:00.082775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.491 [2024-12-15 06:24:00.082782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:56.491 [2024-12-15 06:24:00.082794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.491 [2024-12-15 06:24:00.082801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:126 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:56.491 [2024-12-15 06:24:00.082813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.491 [2024-12-15 06:24:00.082820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:56.491 [2024-12-15 06:24:00.082832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.491 [2024-12-15 06:24:00.082839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:56.491 [2024-12-15 06:24:00.082852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.491 [2024-12-15 06:24:00.082859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:56.491 [2024-12-15 06:24:00.082873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.491 [2024-12-15 06:24:00.082880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:56.491 [2024-12-15 06:24:00.082892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.491 [2024-12-15 06:24:00.082899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:56.491 [2024-12-15 06:24:00.082911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7944 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.491 [2024-12-15 06:24:00.082918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:56.491 [2024-12-15 06:24:00.082930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.491 [2024-12-15 06:24:00.082937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:56.491 [2024-12-15 06:24:00.082951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.491 [2024-12-15 06:24:00.082959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:56.491 [2024-12-15 06:24:00.082972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.491 [2024-12-15 06:24:00.082979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.491 [2024-12-15 06:24:00.082996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.083003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.083016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.083022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0063 
p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.083035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.492 [2024-12-15 06:24:00.083042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.083054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.492 [2024-12-15 06:24:00.083061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.083393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.492 [2024-12-15 06:24:00.083408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.083423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.083430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.083445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.083452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.083466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:56.492 [2024-12-15 06:24:00.083472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.083486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:8016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.083494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.083508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.083515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.083531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.083539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.083553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.083559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.083573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.083580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:56.492 
[2024-12-15 06:24:00.083593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.083600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.083614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.083622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.083652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.083659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.083674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.083681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.083696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.083702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.083717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 
06:24:00.083724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.083738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.083746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.083772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.083779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.083793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.083799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.083813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.083821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.083834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.083841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.083855] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.083862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.083876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.083883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.083896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.083903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.083917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.083924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.083937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.083944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.083958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.083964] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.083978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.083984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.084003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.084010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.084024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.084031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.084044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.084051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.084064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.084073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.084087] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.084094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.084107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.084114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.084127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.084134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.084148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.084155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:56.492 [2024-12-15 06:24:00.084168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.492 [2024-12-15 06:24:00.084175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:56.493 [2024-12-15 06:24:00.084188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.493 [2024-12-15 06:24:00.084203] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:56.493 [2024-12-15 06:24:00.084217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.493 [2024-12-15 06:24:00.084224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:56.493 [2024-12-15 06:24:00.084237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:8288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.493 [2024-12-15 06:24:00.084244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:56.493 [2024-12-15 06:24:00.084258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.493 [2024-12-15 06:24:00.084265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:56.493 [2024-12-15 06:24:00.084279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.493 [2024-12-15 06:24:00.084285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:56.493 [2024-12-15 06:24:00.084299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.493 [2024-12-15 06:24:00.084306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:56.493 [2024-12-15 06:24:00.084319] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.493 [2024-12-15 06:24:00.084326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:56.493 [2024-12-15 06:24:00.084341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.493 [2024-12-15 06:24:00.084348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:56.493 [2024-12-15 06:24:00.084362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.493 [2024-12-15 06:24:00.084369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:56.493 [2024-12-15 06:24:00.084382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.493 [2024-12-15 06:24:00.084389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:56.493 [2024-12-15 06:24:00.084402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:8352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.493 [2024-12-15 06:24:00.084409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:56.493 [2024-12-15 06:24:00.084423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.493 [2024-12-15 06:24:00.084429] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:56.493 [2024-12-15 06:24:00.084528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.493 [2024-12-15 06:24:00.084537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:56.493 [2024-12-15 06:24:00.084554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.493 [2024-12-15 06:24:00.084561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:56.493 [2024-12-15 06:24:00.084577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.493 [2024-12-15 06:24:00.084584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:56.493 [2024-12-15 06:24:00.084600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.493 [2024-12-15 06:24:00.084606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:56.493 [2024-12-15 06:24:00.084624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.493 [2024-12-15 06:24:00.084630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:56.493 [2024-12-15 06:24:00.084647] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.493 [2024-12-15 06:24:00.084655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001b p:0 m:0 dnr:0
[... repeated nvme_qpair.c *NOTICE* command/completion pairs elided: WRITE sqid:1 lba:8416-8584 and READ sqid:1 lba:7656-7832, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:33:56.494 11415.62 IOPS, 44.59 MiB/s [2024-12-15T05:24:16.634Z] 10600.21 IOPS, 41.41 MiB/s [2024-12-15T05:24:16.634Z] 9893.53 IOPS, 38.65 MiB/s [2024-12-15T05:24:16.634Z] 9404.88 IOPS, 36.74 MiB/s [2024-12-15T05:24:16.634Z] 9528.71 IOPS, 37.22 MiB/s [2024-12-15T05:24:16.634Z] 9643.83 IOPS, 37.67 MiB/s [2024-12-15T05:24:16.634Z] 9825.68 IOPS, 38.38 MiB/s [2024-12-15T05:24:16.634Z] 9996.10 IOPS, 39.05 MiB/s [2024-12-15T05:24:16.634Z] 10171.24 IOPS, 39.73 MiB/s [2024-12-15T05:24:16.634Z] 10238.55 IOPS, 39.99 MiB/s [2024-12-15T05:24:16.634Z] 10286.09 IOPS, 40.18 MiB/s [2024-12-15T05:24:16.634Z] 10347.46 IOPS, 40.42 MiB/s [2024-12-15T05:24:16.634Z] 10474.08 IOPS, 40.91 MiB/s [2024-12-15T05:24:16.634Z] 10593.23 IOPS, 41.38 MiB/s [2024-12-15T05:24:16.634Z]
[... repeated nvme_qpair.c *NOTICE* command/completion pairs elided: WRITE sqid:1 lba:38616-39544 and READ sqid:1 lba:38600-38848, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:33:56.496 [2024-12-15 06:24:13.895196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.496 [2024-12-15 06:24:13.895202] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:56.496 [2024-12-15 06:24:13.895214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.496 [2024-12-15 06:24:13.895221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:56.496 [2024-12-15 06:24:13.895233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.496 [2024-12-15 06:24:13.895240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:56.496 [2024-12-15 06:24:13.895252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.496 [2024-12-15 06:24:13.895258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:56.496 [2024-12-15 06:24:13.895270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.496 [2024-12-15 06:24:13.895277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:56.496 [2024-12-15 06:24:13.895289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:38632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.496 [2024-12-15 06:24:13.895296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:56.496 [2024-12-15 06:24:13.895308] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:38664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.496 [2024-12-15 06:24:13.895314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:56.496 [2024-12-15 06:24:13.895327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:38696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.496 [2024-12-15 06:24:13.895333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:56.496 [2024-12-15 06:24:13.895346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:38728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.496 [2024-12-15 06:24:13.895353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:56.496 [2024-12-15 06:24:13.895365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.496 [2024-12-15 06:24:13.895372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:56.496 [2024-12-15 06:24:13.895384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:38792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.496 [2024-12-15 06:24:13.895392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:56.496 [2024-12-15 06:24:13.895405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:38824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.496 [2024-12-15 06:24:13.895411] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:56.496 [2024-12-15 06:24:13.895424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.496 [2024-12-15 06:24:13.895430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:56.496 [2024-12-15 06:24:13.895442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.496 [2024-12-15 06:24:13.895449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:56.496 [2024-12-15 06:24:13.895462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.496 [2024-12-15 06:24:13.895468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:56.496 [2024-12-15 06:24:13.895480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.496 [2024-12-15 06:24:13.895487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:56.496 [2024-12-15 06:24:13.895499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.496 [2024-12-15 06:24:13.895506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:56.496 [2024-12-15 06:24:13.895517] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.496 [2024-12-15 06:24:13.895524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:56.496 [2024-12-15 06:24:13.895536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.496 [2024-12-15 06:24:13.895542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.496 [2024-12-15 06:24:13.895555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.496 [2024-12-15 06:24:13.895562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:56.496 [2024-12-15 06:24:13.895865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.496 [2024-12-15 06:24:13.895877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:56.496 [2024-12-15 06:24:13.895891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:38904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.497 [2024-12-15 06:24:13.895899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.895911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:38936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.497 [2024-12-15 06:24:13.895919] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.895932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.497 [2024-12-15 06:24:13.895939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.895951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:39000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.497 [2024-12-15 06:24:13.895959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.895972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.497 [2024-12-15 06:24:13.895978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.895990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.497 [2024-12-15 06:24:13.896005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.896018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.497 [2024-12-15 06:24:13.896025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.896037] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:39128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.497 [2024-12-15 06:24:13.896043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.896055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.497 [2024-12-15 06:24:13.896063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.896075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.497 [2024-12-15 06:24:13.896082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.896094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.497 [2024-12-15 06:24:13.896101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.896113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.497 [2024-12-15 06:24:13.896120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.896132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.497 [2024-12-15 06:24:13.896138] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.896150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.497 [2024-12-15 06:24:13.896157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.896171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.497 [2024-12-15 06:24:13.896178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.896190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.497 [2024-12-15 06:24:13.896197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.896209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.497 [2024-12-15 06:24:13.896215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.896228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.497 [2024-12-15 06:24:13.896235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.896247] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.497 [2024-12-15 06:24:13.896254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.896267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.497 [2024-12-15 06:24:13.896273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.896286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.497 [2024-12-15 06:24:13.896292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.896304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.497 [2024-12-15 06:24:13.896311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.897122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.497 [2024-12-15 06:24:13.897139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.897154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.497 [2024-12-15 06:24:13.897161] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.897173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.497 [2024-12-15 06:24:13.897180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.897192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.497 [2024-12-15 06:24:13.897199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.897215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.497 [2024-12-15 06:24:13.897221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.897234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.497 [2024-12-15 06:24:13.897241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.897253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.497 [2024-12-15 06:24:13.897260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.897272] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.497 [2024-12-15 06:24:13.897279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.897291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.497 [2024-12-15 06:24:13.897297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.897309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.497 [2024-12-15 06:24:13.897316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.897328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.497 [2024-12-15 06:24:13.897335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.897348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.497 [2024-12-15 06:24:13.897354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.897367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.497 [2024-12-15 06:24:13.897373] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.897386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.497 [2024-12-15 06:24:13.897392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.897404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.497 [2024-12-15 06:24:13.897411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.897423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.497 [2024-12-15 06:24:13.897430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.897442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.497 [2024-12-15 06:24:13.897450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.897462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.497 [2024-12-15 06:24:13.897469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:56.497 [2024-12-15 06:24:13.897481] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:38664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.498 [2024-12-15 06:24:13.897488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:56.498 [2024-12-15 06:24:13.897500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:38728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.498 [2024-12-15 06:24:13.897506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:56.498 [2024-12-15 06:24:13.897519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.498 [2024-12-15 06:24:13.897525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:56.498 [2024-12-15 06:24:13.897537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.498 [2024-12-15 06:24:13.897544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:56.498 [2024-12-15 06:24:13.897557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.498 [2024-12-15 06:24:13.897563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:56.498 [2024-12-15 06:24:13.897575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.498 [2024-12-15 06:24:13.897581] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:56.498 [2024-12-15 06:24:13.897594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.498 [2024-12-15 06:24:13.897601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:56.498 [2024-12-15 06:24:13.898237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:39392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.498 [2024-12-15 06:24:13.898253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:56.498 [2024-12-15 06:24:13.898268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:39424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.498 [2024-12-15 06:24:13.898276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:56.498 [2024-12-15 06:24:13.898288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:39456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.498 [2024-12-15 06:24:13.898295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:56.498 [2024-12-15 06:24:13.898308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:39488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.498 [2024-12-15 06:24:13.898320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:56.498 [2024-12-15 06:24:13.898332] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:39520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.498 [2024-12-15 06:24:13.898340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
[... many similar NOTICE pairs omitted: READ and WRITE commands on sqid:1 (varying cid/lba, len:8) each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 06:24:13.898 through 06:24:13.913 ...]
00:33:56.501 [2024-12-15 06:24:13.913189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:39840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.501 [2024-12-15 06:24:13.913195] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:56.501 [2024-12-15 06:24:13.913208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.501 [2024-12-15 06:24:13.913214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:56.501 [2024-12-15 06:24:13.913227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.501 [2024-12-15 06:24:13.913233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:56.501 [2024-12-15 06:24:13.913245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:39232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.501 [2024-12-15 06:24:13.913252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:56.501 [2024-12-15 06:24:13.913264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:39672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.501 [2024-12-15 06:24:13.913270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:56.501 [2024-12-15 06:24:13.913283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:39592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.501 [2024-12-15 06:24:13.913289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:56.501 [2024-12-15 06:24:13.913301] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.501 [2024-12-15 06:24:13.913307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:56.501 [2024-12-15 06:24:13.913321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:38872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.501 [2024-12-15 06:24:13.913328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:56.501 [2024-12-15 06:24:13.913340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.501 [2024-12-15 06:24:13.913346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:56.501 [2024-12-15 06:24:13.913358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.501 [2024-12-15 06:24:13.913365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:56.501 [2024-12-15 06:24:13.913377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.501 [2024-12-15 06:24:13.913383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:56.501 [2024-12-15 06:24:13.913395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.501 [2024-12-15 06:24:13.913402] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:56.501 [2024-12-15 06:24:13.913414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.501 [2024-12-15 06:24:13.913420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:56.501 [2024-12-15 06:24:13.913432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.501 [2024-12-15 06:24:13.913439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:56.501 [2024-12-15 06:24:13.913451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.501 [2024-12-15 06:24:13.913457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:56.501 [2024-12-15 06:24:13.913469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.501 [2024-12-15 06:24:13.913476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:56.501 [2024-12-15 06:24:13.913488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.501 [2024-12-15 06:24:13.913495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:56.501 [2024-12-15 06:24:13.913507] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:39384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.501 [2024-12-15 06:24:13.913513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:56.501 [2024-12-15 06:24:13.913525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.501 [2024-12-15 06:24:13.913532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:56.501 [2024-12-15 06:24:13.913545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:39616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.501 [2024-12-15 06:24:13.913552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:56.501 [2024-12-15 06:24:13.913564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.501 [2024-12-15 06:24:13.913570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:56.501 [2024-12-15 06:24:13.913582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.501 [2024-12-15 06:24:13.913589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:56.501 [2024-12-15 06:24:13.913601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:39688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.501 [2024-12-15 06:24:13.913608] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.501 [2024-12-15 06:24:13.913620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:39720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.501 [2024-12-15 06:24:13.913626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.501 [2024-12-15 06:24:13.913638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:39752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.501 [2024-12-15 06:24:13.913645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:56.501 [2024-12-15 06:24:13.913657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.501 [2024-12-15 06:24:13.913663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:56.501 [2024-12-15 06:24:13.913675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:39816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.501 [2024-12-15 06:24:13.913682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:56.501 [2024-12-15 06:24:13.913695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.501 [2024-12-15 06:24:13.913701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:56.501 [2024-12-15 06:24:13.915430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.501 [2024-12-15 06:24:13.915450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.915469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.502 [2024-12-15 06:24:13.915479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.915495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.502 [2024-12-15 06:24:13.915504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.915521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.502 [2024-12-15 06:24:13.915533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.915550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:40104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.502 [2024-12-15 06:24:13.915559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.915576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:40120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.502 [2024-12-15 06:24:13.915585] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.915601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:40136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.502 [2024-12-15 06:24:13.915610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.915626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:40152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.502 [2024-12-15 06:24:13.915635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.915651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:40168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.502 [2024-12-15 06:24:13.915660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.915676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.502 [2024-12-15 06:24:13.915685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.915701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:40200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.502 [2024-12-15 06:24:13.915710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.915726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:40216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.502 [2024-12-15 06:24:13.915735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.915751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.502 [2024-12-15 06:24:13.915761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.915777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:39544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.502 [2024-12-15 06:24:13.915786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.915802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.502 [2024-12-15 06:24:13.915811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.915827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.502 [2024-12-15 06:24:13.915838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.915854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.502 [2024-12-15 06:24:13.915863] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.915879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:38944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.502 [2024-12-15 06:24:13.915888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.915904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:39232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.502 [2024-12-15 06:24:13.915913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.915929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:39592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.502 [2024-12-15 06:24:13.915938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.915955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.502 [2024-12-15 06:24:13.915964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.915980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.502 [2024-12-15 06:24:13.915989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.916012] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.502 [2024-12-15 06:24:13.916021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.916038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.502 [2024-12-15 06:24:13.916047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.916063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.502 [2024-12-15 06:24:13.916072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.916088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:39384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.502 [2024-12-15 06:24:13.916096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.916113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:39616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.502 [2024-12-15 06:24:13.916122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.916138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.502 [2024-12-15 06:24:13.916147] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.916165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:39720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.502 [2024-12-15 06:24:13.916174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.916191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.502 [2024-12-15 06:24:13.916200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.916216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.502 [2024-12-15 06:24:13.916225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.917418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:39872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.502 [2024-12-15 06:24:13.917437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.917456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.502 [2024-12-15 06:24:13.917465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.917482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:39936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.502 [2024-12-15 06:24:13.917491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.917507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:40248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.502 [2024-12-15 06:24:13.917516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.917533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:40264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.502 [2024-12-15 06:24:13.917542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.917558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:40280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.502 [2024-12-15 06:24:13.917567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.917583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:40296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.502 [2024-12-15 06:24:13.917593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.917608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:40312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.502 [2024-12-15 06:24:13.917617] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.917634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.502 [2024-12-15 06:24:13.917643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:56.502 [2024-12-15 06:24:13.917666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:40344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.503 [2024-12-15 06:24:13.917675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:56.503 [2024-12-15 06:24:13.917691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:40360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.503 [2024-12-15 06:24:13.917700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:56.503 [2024-12-15 06:24:13.917716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:39968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.503 [2024-12-15 06:24:13.917725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:56.503 [2024-12-15 06:24:13.917742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:40000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.503 [2024-12-15 06:24:13.917750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:56.503 [2024-12-15 06:24:13.917767] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:40032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.503 [2024-12-15 06:24:13.917776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:56.503 [2024-12-15 06:24:13.917792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:40384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.503 [2024-12-15 06:24:13.917801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:56.503 [2024-12-15 06:24:13.917817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:40400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.503 [2024-12-15 06:24:13.917826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:56.503 [2024-12-15 06:24:13.917843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:39728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.503 [2024-12-15 06:24:13.917852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:56.503 [2024-12-15 06:24:13.917868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:39792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.503 [2024-12-15 06:24:13.917877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:56.503 [2024-12-15 06:24:13.917893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.503 [2024-12-15 06:24:13.917902] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:56.503 [2024-12-15 06:24:13.917918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.503 [2024-12-15 06:24:13.917927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:56.503 [2024-12-15 06:24:13.917943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:40120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.503 [2024-12-15 06:24:13.917952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:56.503 [2024-12-15 06:24:13.917969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:40152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.503 [2024-12-15 06:24:13.917979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:56.503 [2024-12-15 06:24:13.918003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:40184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.503 [2024-12-15 06:24:13.918013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:56.503 [2024-12-15 06:24:13.918029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:40216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.503 [2024-12-15 06:24:13.918038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:56.503 [2024-12-15 06:24:13.918055] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:39544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.503 [2024-12-15 06:24:13.918065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:33:56.503 [2024-12-15 06:24:13.918081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.503 [2024-12-15 06:24:13.918090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:33:56.503 [2024-12-15 06:24:13.918106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:38944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.503 [2024-12-15 06:24:13.918115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:33:56.503 [2024-12-15 06:24:13.918132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:39592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.503 [2024-12-15 06:24:13.918141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:33:56.503 [2024-12-15 06:24:13.918157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.503 [2024-12-15 06:24:13.918166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:56.503 [2024-12-15 06:24:13.918182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.503 [2024-12-15 06:24:13.918191] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:56.503 [2024-12-15 06:24:13.918207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:39384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.503 [2024-12-15 06:24:13.918216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:56.503 [2024-12-15 06:24:13.918233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:39480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.503 [2024-12-15 06:24:13.918241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:33:56.503 [2024-12-15 06:24:13.918258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.503 [2024-12-15 06:24:13.918267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:33:56.503 [2024-12-15 06:24:13.919880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:39864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.503 [2024-12-15 06:24:13.919905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:33:56.503 [2024-12-15 06:24:13.919925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.503 [2024-12-15 06:24:13.919935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:33:56.503 [2024-12-15 06:24:13.919952] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:39992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.503 [2024-12-15 06:24:13.919960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:33:56.503 [2024-12-15 06:24:13.919977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.503 [2024-12-15 06:24:13.919986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:33:56.503 [2024-12-15 06:24:13.920008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:40424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.503 [2024-12-15 06:24:13.920018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:33:56.503 [2024-12-15 06:24:13.920034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:40440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.503 [2024-12-15 06:24:13.920043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:33:56.503 [2024-12-15 06:24:13.920060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.503 [2024-12-15 06:24:13.920069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:33:56.503 [2024-12-15 06:24:13.920085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:40472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.503 [2024-12-15 06:24:13.920094] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:33:56.503 [2024-12-15 06:24:13.920111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:40488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.503 [2024-12-15 06:24:13.920120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:33:56.503 [2024-12-15 06:24:13.920136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:40504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.503 [2024-12-15 06:24:13.920145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:33:56.503 [2024-12-15 06:24:13.920161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:40520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.503 [2024-12-15 06:24:13.920170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:33:56.503 [2024-12-15 06:24:13.920186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.503 [2024-12-15 06:24:13.920196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:33:56.503 [2024-12-15 06:24:13.920212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:40248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.503 [2024-12-15 06:24:13.920221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:33:56.503 [2024-12-15 06:24:13.920239] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:40280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.503 [2024-12-15 06:24:13.920248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:33:56.503 [2024-12-15 06:24:13.920264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:40312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.503 [2024-12-15 06:24:13.920273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:33:56.503 [2024-12-15 06:24:13.920289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:40344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.503 [2024-12-15 06:24:13.920298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.920315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:39968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.504 [2024-12-15 06:24:13.920324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.920340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:40032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.504 [2024-12-15 06:24:13.920349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.920365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:40400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.504 [2024-12-15 06:24:13.920374] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.920390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:39792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.504 [2024-12-15 06:24:13.920399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.920416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.504 [2024-12-15 06:24:13.920425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.920442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:40152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.504 [2024-12-15 06:24:13.920451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.921722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:40216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.504 [2024-12-15 06:24:13.921742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.921761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.504 [2024-12-15 06:24:13.921770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.921787] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.504 [2024-12-15 06:24:13.921797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.921817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.504 [2024-12-15 06:24:13.921826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.921843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:39480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.504 [2024-12-15 06:24:13.921852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.921868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:40528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.504 [2024-12-15 06:24:13.921877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.921893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:40544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.504 [2024-12-15 06:24:13.921902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.921919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:40560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.504 [2024-12-15 06:24:13.921927] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.921944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:40576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.504 [2024-12-15 06:24:13.921953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.921969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:40064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.504 [2024-12-15 06:24:13.921978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.922000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:40096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.504 [2024-12-15 06:24:13.922010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.922026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:40128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.504 [2024-12-15 06:24:13.922035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.922052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:40160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.504 [2024-12-15 06:24:13.922060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.922077] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.504 [2024-12-15 06:24:13.922086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.922102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:40224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.504 [2024-12-15 06:24:13.922111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.922129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:39712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.504 [2024-12-15 06:24:13.922138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.922155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:39840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.504 [2024-12-15 06:24:13.922164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.922180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:39848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.504 [2024-12-15 06:24:13.922189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.922206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:39976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.504 [2024-12-15 06:24:13.922215] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.922231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:40592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.504 [2024-12-15 06:24:13.922240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.922257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:40608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.504 [2024-12-15 06:24:13.922266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.922282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:40624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.504 [2024-12-15 06:24:13.922291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.922307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:39928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.504 [2024-12-15 06:24:13.922316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.922332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:40408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.504 [2024-12-15 06:24:13.922342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.922358] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:40440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.504 [2024-12-15 06:24:13.922367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.922383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:40472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.504 [2024-12-15 06:24:13.922392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.922408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:40504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.504 [2024-12-15 06:24:13.922417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.922433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:39904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.504 [2024-12-15 06:24:13.922444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.922460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:40280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.504 [2024-12-15 06:24:13.922469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:33:56.504 [2024-12-15 06:24:13.922485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:40344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.505 [2024-12-15 06:24:13.922494] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.922511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.505 [2024-12-15 06:24:13.922519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.922536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:39792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.505 [2024-12-15 06:24:13.922545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.922561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:40152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.505 [2024-12-15 06:24:13.922570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.924521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:40272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.505 [2024-12-15 06:24:13.924539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.924553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:40304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.505 [2024-12-15 06:24:13.924560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.924572] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.505 [2024-12-15 06:24:13.924578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.924590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:40632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.505 [2024-12-15 06:24:13.924597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.924618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.505 [2024-12-15 06:24:13.924625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.924637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:40664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.505 [2024-12-15 06:24:13.924643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.924655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:40376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.505 [2024-12-15 06:24:13.924665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.924677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.505 [2024-12-15 06:24:13.924683] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.924695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:40688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.505 [2024-12-15 06:24:13.924702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.924714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:40704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.505 [2024-12-15 06:24:13.924720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.924732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.505 [2024-12-15 06:24:13.924739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.924751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:40736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.505 [2024-12-15 06:24:13.924758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.924769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:40104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.505 [2024-12-15 06:24:13.924776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.924788] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.505 [2024-12-15 06:24:13.924795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.924807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.505 [2024-12-15 06:24:13.924813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.924825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.505 [2024-12-15 06:24:13.924832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.924844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.505 [2024-12-15 06:24:13.924851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.924862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:40528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.505 [2024-12-15 06:24:13.924869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.924881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:40560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.505 [2024-12-15 06:24:13.924888] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.924901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:40064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.505 [2024-12-15 06:24:13.924908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.924920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:40128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.505 [2024-12-15 06:24:13.924926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.924938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:40192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.505 [2024-12-15 06:24:13.924945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.924957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:39712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.505 [2024-12-15 06:24:13.924963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.924975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:39848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.505 [2024-12-15 06:24:13.924982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.924999] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:40592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.505 [2024-12-15 06:24:13.925007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.925019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:40624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.505 [2024-12-15 06:24:13.925025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.925038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:40408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.505 [2024-12-15 06:24:13.925045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.925944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:40472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.505 [2024-12-15 06:24:13.925960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.925974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:39904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.505 [2024-12-15 06:24:13.925981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:33:56.505 [2024-12-15 06:24:13.926001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:40344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.505 [2024-12-15 06:24:13.926009] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:56.505 [2024-12-15 06:24:13.926021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:39792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.505 [2024-12-15 06:24:13.926027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:56.505 [2024-12-15 06:24:13.926043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:39808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.505 [2024-12-15 06:24:13.926050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:56.505 [2024-12-15 06:24:13.926062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:40008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.505 [2024-12-15 06:24:13.926069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:56.505 [2024-12-15 06:24:13.926081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:40752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.505 [2024-12-15 06:24:13.926088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:56.505 [2024-12-15 06:24:13.926100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.505 [2024-12-15 06:24:13.926106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:56.505 [2024-12-15 06:24:13.926119] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:40784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.505 [2024-12-15 06:24:13.926125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.926137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:40416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.506 [2024-12-15 06:24:13.926144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.926156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:40448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.506 [2024-12-15 06:24:13.926163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.926175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:40480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.506 [2024-12-15 06:24:13.926182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.926194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:40512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.506 [2024-12-15 06:24:13.926201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.926213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:40296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.506 [2024-12-15 06:24:13.926220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.926232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:40360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.506 [2024-12-15 06:24:13.926238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.926250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:40056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.506 [2024-12-15 06:24:13.926257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.926269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:40184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.506 [2024-12-15 06:24:13.926277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.926290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.506 [2024-12-15 06:24:13.926296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.926308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:40824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.506 [2024-12-15 06:24:13.926315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.926327] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:40840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.506 [2024-12-15 06:24:13.926334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.926346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:40856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.506 [2024-12-15 06:24:13.926352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.926364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.506 [2024-12-15 06:24:13.926371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.926383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.506 [2024-12-15 06:24:13.926390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.926402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:40568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.506 [2024-12-15 06:24:13.926408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.926421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:40304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.506 [2024-12-15 06:24:13.926427] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.926439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:40632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.506 [2024-12-15 06:24:13.926446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.926458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:40664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.506 [2024-12-15 06:24:13.926465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.926477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:40672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.506 [2024-12-15 06:24:13.926483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.926495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:40704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.506 [2024-12-15 06:24:13.926503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.926516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:40736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.506 [2024-12-15 06:24:13.926522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.926534] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:40168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.506 [2024-12-15 06:24:13.926541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.926553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.506 [2024-12-15 06:24:13.926559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.926572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:40528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.506 [2024-12-15 06:24:13.926578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.926591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.506 [2024-12-15 06:24:13.926597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.926609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:40192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.506 [2024-12-15 06:24:13.926616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.926628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.506 [2024-12-15 06:24:13.926635] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.927144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:40624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.506 [2024-12-15 06:24:13.927157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.928347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.506 [2024-12-15 06:24:13.928364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.928379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:40424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.506 [2024-12-15 06:24:13.928386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.928399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:40488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.506 [2024-12-15 06:24:13.928405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.928418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.506 [2024-12-15 06:24:13.928424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.928439] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:40888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.506 [2024-12-15 06:24:13.928446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.928458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:40904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.506 [2024-12-15 06:24:13.928465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.928477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.506 [2024-12-15 06:24:13.928483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.928495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:40936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.506 [2024-12-15 06:24:13.928502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.928514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:40952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.506 [2024-12-15 06:24:13.928521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.928533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.506 [2024-12-15 06:24:13.928539] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.928551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:40984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.506 [2024-12-15 06:24:13.928558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:56.506 [2024-12-15 06:24:13.928570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.506 [2024-12-15 06:24:13.928577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.928589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:40992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.507 [2024-12-15 06:24:13.928596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.928608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.507 [2024-12-15 06:24:13.928614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.928626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:39792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.507 [2024-12-15 06:24:13.928633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.928645] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.507 [2024-12-15 06:24:13.928651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.928665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:40768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.507 [2024-12-15 06:24:13.928672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.928684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:40416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.507 [2024-12-15 06:24:13.928690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.928703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:40480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.507 [2024-12-15 06:24:13.928709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.928722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:40296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.507 [2024-12-15 06:24:13.928728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.928740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:40056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.507 [2024-12-15 06:24:13.928747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.928759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:40808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.507 [2024-12-15 06:24:13.928765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.928777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:40840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.507 [2024-12-15 06:24:13.928784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.928796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:40872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.507 [2024-12-15 06:24:13.928802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.928815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:40568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.507 [2024-12-15 06:24:13.928821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.928833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:40632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.507 [2024-12-15 06:24:13.928840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.928852] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:40672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.507 [2024-12-15 06:24:13.928858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.928870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.507 [2024-12-15 06:24:13.928877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.929896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.507 [2024-12-15 06:24:13.929914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.929929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:40064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.507 [2024-12-15 06:24:13.929935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.929947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.507 [2024-12-15 06:24:13.929954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.929967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.507 [2024-12-15 06:24:13.929973] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.929985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:40696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.507 [2024-12-15 06:24:13.929998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.930011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:40728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.507 [2024-12-15 06:24:13.930017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.930030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.507 [2024-12-15 06:24:13.930036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.930048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:40608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.507 [2024-12-15 06:24:13.930054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.930067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:40504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.507 [2024-12-15 06:24:13.930073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.930085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:40152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.507 [2024-12-15 06:24:13.930092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.930104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.507 [2024-12-15 06:24:13.930111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.930123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:41016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.507 [2024-12-15 06:24:13.930129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.930141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.507 [2024-12-15 06:24:13.930152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.930164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.507 [2024-12-15 06:24:13.930171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.930183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.507 [2024-12-15 06:24:13.930189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.930202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.507 [2024-12-15 06:24:13.930208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.930220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:41096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.507 [2024-12-15 06:24:13.930227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.930239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.507 [2024-12-15 06:24:13.930245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.930257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.507 [2024-12-15 06:24:13.930264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.930276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:41144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.507 [2024-12-15 06:24:13.930283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.930295] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:40776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.507 [2024-12-15 06:24:13.930302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.930314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.507 [2024-12-15 06:24:13.930320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.930332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:40832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.507 [2024-12-15 06:24:13.930339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:56.507 [2024-12-15 06:24:13.930351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:40864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.507 [2024-12-15 06:24:13.930358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.930370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:40688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.508 [2024-12-15 06:24:13.930376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.930390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.508 [2024-12-15 06:24:13.930396] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.930409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.508 [2024-12-15 06:24:13.930415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.930428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:40904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.508 [2024-12-15 06:24:13.930434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.930446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:40936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.508 [2024-12-15 06:24:13.930453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.930465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:40968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.508 [2024-12-15 06:24:13.930472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.930484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:40400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.508 [2024-12-15 06:24:13.930491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.930503] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:39904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.508 [2024-12-15 06:24:13.930509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.930522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:40008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.508 [2024-12-15 06:24:13.930528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.930540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:40416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.508 [2024-12-15 06:24:13.930547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.930559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:40296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.508 [2024-12-15 06:24:13.930566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.930578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:40808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.508 [2024-12-15 06:24:13.930584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.930596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:40872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.508 [2024-12-15 06:24:13.930603] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.930616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:40632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.508 [2024-12-15 06:24:13.930623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.930635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:40736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.508 [2024-12-15 06:24:13.930642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.932175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:40560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.508 [2024-12-15 06:24:13.932194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.932208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:40408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.508 [2024-12-15 06:24:13.932215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.932228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.508 [2024-12-15 06:24:13.932234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.932247] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.508 [2024-12-15 06:24:13.932253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.932266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.508 [2024-12-15 06:24:13.932272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.932285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.508 [2024-12-15 06:24:13.932291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.932303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:41224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.508 [2024-12-15 06:24:13.932310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.932322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:41240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.508 [2024-12-15 06:24:13.932329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.932341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.508 [2024-12-15 06:24:13.932348] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.933147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.508 [2024-12-15 06:24:13.933163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.933177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.508 [2024-12-15 06:24:13.933187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.933199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:40912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.508 [2024-12-15 06:24:13.933206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.933219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.508 [2024-12-15 06:24:13.933225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.933238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.508 [2024-12-15 06:24:13.933244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.933256] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:40344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.508 [2024-12-15 06:24:13.933263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.933275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:40064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.508 [2024-12-15 06:24:13.933281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.933294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.508 [2024-12-15 06:24:13.933300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.933313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:40728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.508 [2024-12-15 06:24:13.933319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.933331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:40608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.508 [2024-12-15 06:24:13.933338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.933350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:40152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.508 [2024-12-15 06:24:13.933357] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:56.508 [2024-12-15 06:24:13.933369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:41016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.509 [2024-12-15 06:24:13.933376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.933388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:41048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.509 [2024-12-15 06:24:13.933395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.933407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:41080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.509 [2024-12-15 06:24:13.933415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.933427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.509 [2024-12-15 06:24:13.933433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.933446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:41144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.509 [2024-12-15 06:24:13.933452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.933464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.509 [2024-12-15 06:24:13.933471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.933483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:40864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.509 [2024-12-15 06:24:13.933490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.933502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.509 [2024-12-15 06:24:13.933508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.933520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.509 [2024-12-15 06:24:13.933527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.933539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:40968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.509 [2024-12-15 06:24:13.933546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.933558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.509 [2024-12-15 06:24:13.933564] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.933577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:40416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.509 [2024-12-15 06:24:13.933583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.933596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:40808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.509 [2024-12-15 06:24:13.933602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.933614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:40632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.509 [2024-12-15 06:24:13.933621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.933633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:40752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.509 [2024-12-15 06:24:13.933641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.933653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.509 [2024-12-15 06:24:13.933660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.933672] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:40664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.509 [2024-12-15 06:24:13.933679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.933691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:41288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.509 [2024-12-15 06:24:13.933697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.933709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.509 [2024-12-15 06:24:13.933716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.933728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.509 [2024-12-15 06:24:13.933735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.933747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:40624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.509 [2024-12-15 06:24:13.933754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.933767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.509 [2024-12-15 06:24:13.933774] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.933786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.509 [2024-12-15 06:24:13.933793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.933805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:41088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.509 [2024-12-15 06:24:13.933812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.933824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.509 [2024-12-15 06:24:13.933830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.933842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.509 [2024-12-15 06:24:13.933849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.933861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:41368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.509 [2024-12-15 06:24:13.933867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.933881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.509 [2024-12-15 06:24:13.933888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.933899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.509 [2024-12-15 06:24:13.933906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.933918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.509 [2024-12-15 06:24:13.933925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.933937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.509 [2024-12-15 06:24:13.933944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.934454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.509 [2024-12-15 06:24:13.934468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.934482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:41104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.509 [2024-12-15 06:24:13.934489] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.934501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.509 [2024-12-15 06:24:13.934508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.934520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:40408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.509 [2024-12-15 06:24:13.934527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.934539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.509 [2024-12-15 06:24:13.934546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.934558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.509 [2024-12-15 06:24:13.934565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.934577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.509 [2024-12-15 06:24:13.934584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.936184] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.509 [2024-12-15 06:24:13.936201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.936219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.509 [2024-12-15 06:24:13.936226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:56.509 [2024-12-15 06:24:13.936239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.510 [2024-12-15 06:24:13.936245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:56.510 [2024-12-15 06:24:13.936257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:40840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.510 [2024-12-15 06:24:13.936273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:56.510 [2024-12-15 06:24:13.936285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.510 [2024-12-15 06:24:13.936291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:56.510 [2024-12-15 06:24:13.936304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.510 [2024-12-15 06:24:13.936310] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:56.510 [2024-12-15 06:24:13.936322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.510 [2024-12-15 06:24:13.936329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:56.510 [2024-12-15 06:24:13.936341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.510 [2024-12-15 06:24:13.936347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.510 [2024-12-15 06:24:13.936359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.510 [2024-12-15 06:24:13.936366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:56.510 [2024-12-15 06:24:13.936378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.510 [2024-12-15 06:24:13.936385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:56.510 [2024-12-15 06:24:13.936397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.510 [2024-12-15 06:24:13.936403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:56.510 [2024-12-15 06:24:13.936415] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:41184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.510 [2024-12-15 06:24:13.936422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:56.510 [2024-12-15 06:24:13.936434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:41216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.510 [2024-12-15 06:24:13.936442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:56.510 [2024-12-15 06:24:13.936454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:41248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.510 [2024-12-15 06:24:13.936462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:56.510 [2024-12-15 06:24:13.936475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.510 [2024-12-15 06:24:13.936481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:56.510 [2024-12-15 06:24:13.936493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.510 [2024-12-15 06:24:13.936500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:56.510 [2024-12-15 06:24:13.936512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:40344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.510 [2024-12-15 06:24:13.936519] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:33:56.510 [2024-12-15 06:24:13.936531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.510 [2024-12-15 06:24:13.936538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs omitted: READ and WRITE commands on qid:1 completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:33:56.513 [2024-12-15 06:24:13.942814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.513 [2024-12-15 06:24:13.942821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:33:56.513 [2024-12-15 06:24:13.942833] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:42000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.513 [2024-12-15 06:24:13.942840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:56.513 [2024-12-15 06:24:13.942852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:41688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.513 [2024-12-15 06:24:13.942859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:56.513 [2024-12-15 06:24:13.942871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:41720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.513 [2024-12-15 06:24:13.942878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:56.513 [2024-12-15 06:24:13.942890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:40632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.513 [2024-12-15 06:24:13.942896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.513 [2024-12-15 06:24:13.942911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:40408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.513 [2024-12-15 06:24:13.942918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:56.513 [2024-12-15 06:24:13.942930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:41128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.513 [2024-12-15 06:24:13.942937] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:56.513 [2024-12-15 06:24:13.942949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:41328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.513 [2024-12-15 06:24:13.942955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:56.513 [2024-12-15 06:24:13.942967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.513 [2024-12-15 06:24:13.942974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:56.513 [2024-12-15 06:24:13.942986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:41392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.513 [2024-12-15 06:24:13.942999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:56.513 [2024-12-15 06:24:13.943011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.513 [2024-12-15 06:24:13.943018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:56.513 [2024-12-15 06:24:13.943030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.513 [2024-12-15 06:24:13.943037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:56.513 [2024-12-15 06:24:13.943049] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.513 [2024-12-15 06:24:13.943056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:56.513 [2024-12-15 06:24:13.943067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:41840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.513 [2024-12-15 06:24:13.943074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:56.513 [2024-12-15 06:24:13.943086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.513 [2024-12-15 06:24:13.943092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:56.513 [2024-12-15 06:24:13.943104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:41464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.513 [2024-12-15 06:24:13.943111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:56.513 [2024-12-15 06:24:13.943123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:41528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.513 [2024-12-15 06:24:13.943130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:56.513 [2024-12-15 06:24:13.943143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:41904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.513 [2024-12-15 06:24:13.943150] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:56.513 [2024-12-15 06:24:13.943162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:41240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.513 [2024-12-15 06:24:13.943169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:56.513 [2024-12-15 06:24:13.943181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:41712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.513 [2024-12-15 06:24:13.943187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:56.513 [2024-12-15 06:24:13.943199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:41760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.513 [2024-12-15 06:24:13.943206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:56.513 [2024-12-15 06:24:13.943218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.513 [2024-12-15 06:24:13.943225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:56.513 [2024-12-15 06:24:13.943237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.513 [2024-12-15 06:24:13.943243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:56.513 [2024-12-15 06:24:13.943255] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:40656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.513 [2024-12-15 06:24:13.943262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:56.513 [2024-12-15 06:24:13.943274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:40808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.513 [2024-12-15 06:24:13.943281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:56.513 [2024-12-15 06:24:13.943293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.513 [2024-12-15 06:24:13.943300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.943312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:41304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.514 [2024-12-15 06:24:13.943319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.944216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:41432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.514 [2024-12-15 06:24:13.944232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.944247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:41568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.514 [2024-12-15 06:24:13.944253] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.944266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:41632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.514 [2024-12-15 06:24:13.944275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.944288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.514 [2024-12-15 06:24:13.944297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.944309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:42024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.514 [2024-12-15 06:24:13.944316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.944328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.514 [2024-12-15 06:24:13.944334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.944347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:42056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.514 [2024-12-15 06:24:13.944353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.944366] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:42072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.514 [2024-12-15 06:24:13.944372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.944384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:42088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.514 [2024-12-15 06:24:13.944391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.944403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:42104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.514 [2024-12-15 06:24:13.944409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.944421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:42120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.514 [2024-12-15 06:24:13.944428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.944440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:42136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.514 [2024-12-15 06:24:13.944447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.944459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:42152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.514 [2024-12-15 06:24:13.944465] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.944477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:42168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.514 [2024-12-15 06:24:13.944484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.944496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:42184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.514 [2024-12-15 06:24:13.944504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.944869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:41768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.514 [2024-12-15 06:24:13.944883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.944907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:41800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.514 [2024-12-15 06:24:13.944914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.944927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:41832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.514 [2024-12-15 06:24:13.944933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.944945] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:41864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.514 [2024-12-15 06:24:13.944952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.944965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:41896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.514 [2024-12-15 06:24:13.944971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.944983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:41728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.514 [2024-12-15 06:24:13.944990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.945008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:41624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.514 [2024-12-15 06:24:13.945015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.945027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:41920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.514 [2024-12-15 06:24:13.945034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.945046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.514 [2024-12-15 06:24:13.945052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.945065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.514 [2024-12-15 06:24:13.945071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.945083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:41688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.514 [2024-12-15 06:24:13.945090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.945102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:40632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.514 [2024-12-15 06:24:13.945109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.945124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:41128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.514 [2024-12-15 06:24:13.945130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.945142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.514 [2024-12-15 06:24:13.945149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.945162] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:41320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.514 [2024-12-15 06:24:13.945168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.945180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:41808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.514 [2024-12-15 06:24:13.945187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.945199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.514 [2024-12-15 06:24:13.945205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.945217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:41528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.514 [2024-12-15 06:24:13.945224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.945236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:41240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.514 [2024-12-15 06:24:13.945242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.945255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:41760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.514 [2024-12-15 06:24:13.945261] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.945273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.514 [2024-12-15 06:24:13.945280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.945292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.514 [2024-12-15 06:24:13.945299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.945311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.514 [2024-12-15 06:24:13.945318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:56.514 [2024-12-15 06:24:13.945669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:41016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.514 [2024-12-15 06:24:13.945681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:56.515 [2024-12-15 06:24:13.945698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.515 [2024-12-15 06:24:13.945705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:56.515 [2024-12-15 06:24:13.945717] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:42208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.515 [2024-12-15 06:24:13.945724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.515 [2024-12-15 06:24:13.945736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:42224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.515 [2024-12-15 06:24:13.945743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.515 [2024-12-15 06:24:13.945755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:42240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.515 [2024-12-15 06:24:13.945761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:56.515 [2024-12-15 06:24:13.945773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:42256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.515 [2024-12-15 06:24:13.945780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:56.515 [2024-12-15 06:24:13.945792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:42272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.515 [2024-12-15 06:24:13.945799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:56.515 [2024-12-15 06:24:13.945811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:42288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.515 [2024-12-15 06:24:13.945817] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:56.515 [2024-12-15 06:24:13.945830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.515 [2024-12-15 06:24:13.945836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:56.515 [2024-12-15 06:24:13.945849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:41944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.515 [2024-12-15 06:24:13.945855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:56.515 [2024-12-15 06:24:13.945867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:41976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.515 [2024-12-15 06:24:13.945874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:56.515 10661.26 IOPS, 41.65 MiB/s [2024-12-15T05:24:16.655Z] 10692.46 IOPS, 41.77 MiB/s [2024-12-15T05:24:16.655Z] Received shutdown signal, test time was about 28.917607 seconds 00:33:56.515 00:33:56.515 Latency(us) 00:33:56.515 [2024-12-15T05:24:16.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:56.515 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:33:56.515 Verification LBA range: start 0x0 length 0x4000 00:33:56.515 Nvme0n1 : 28.92 10718.93 41.87 0.00 0.00 11922.19 1185.89 3019898.88 00:33:56.515 [2024-12-15T05:24:16.655Z] =================================================================================================================== 00:33:56.515 [2024-12-15T05:24:16.655Z] Total : 10718.93 41.87 0.00 
0.00 11922.19 1185.89 3019898.88 00:33:56.515 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:56.515 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:33:56.515 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:56.515 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:33:56.515 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:56.515 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:33:56.515 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:56.515 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:33:56.515 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:56.515 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:56.515 rmmod nvme_tcp 00:33:56.515 rmmod nvme_fabrics 00:33:56.774 rmmod nvme_keyring 00:33:56.774 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:56.774 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:33:56.774 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:33:56.774 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1159378 ']' 00:33:56.774 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1159378 
00:33:56.774 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1159378 ']' 00:33:56.774 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1159378 00:33:56.774 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:56.774 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:56.774 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1159378 00:33:56.774 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:56.774 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:56.774 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1159378' 00:33:56.774 killing process with pid 1159378 00:33:56.774 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1159378 00:33:56.774 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1159378 00:33:56.774 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:56.774 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:56.774 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:56.774 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:33:56.774 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:33:56.774 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:56.774 06:24:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:33:56.774 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:56.774 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:56.774 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:56.774 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:56.774 06:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:59.310 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:59.310 00:33:59.310 real 0m40.563s 00:33:59.310 user 1m50.060s 00:33:59.310 sys 0m11.640s 00:33:59.310 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:59.310 06:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:59.310 ************************************ 00:33:59.310 END TEST nvmf_host_multipath_status 00:33:59.310 ************************************ 00:33:59.310 06:24:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:59.310 06:24:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:59.310 06:24:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:59.310 06:24:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.310 ************************************ 00:33:59.310 START TEST nvmf_discovery_remove_ifc 00:33:59.310 ************************************ 00:33:59.310 
06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:59.310 * Looking for test storage... 00:33:59.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:59.310 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:59.310 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:33:59.310 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:59.310 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:59.310 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:59.310 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:59.310 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:59.310 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:33:59.310 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:33:59.310 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:33:59.310 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:33:59.310 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:33:59.310 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:33:59.311 06:24:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:59.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.311 --rc genhtml_branch_coverage=1 00:33:59.311 --rc genhtml_function_coverage=1 00:33:59.311 --rc genhtml_legend=1 00:33:59.311 --rc geninfo_all_blocks=1 00:33:59.311 --rc geninfo_unexecuted_blocks=1 00:33:59.311 00:33:59.311 ' 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:59.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.311 --rc genhtml_branch_coverage=1 00:33:59.311 --rc genhtml_function_coverage=1 00:33:59.311 --rc genhtml_legend=1 00:33:59.311 --rc geninfo_all_blocks=1 00:33:59.311 --rc geninfo_unexecuted_blocks=1 00:33:59.311 00:33:59.311 ' 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:59.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.311 --rc genhtml_branch_coverage=1 00:33:59.311 --rc genhtml_function_coverage=1 00:33:59.311 --rc genhtml_legend=1 00:33:59.311 --rc geninfo_all_blocks=1 00:33:59.311 --rc geninfo_unexecuted_blocks=1 00:33:59.311 00:33:59.311 ' 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:59.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.311 --rc genhtml_branch_coverage=1 00:33:59.311 --rc genhtml_function_coverage=1 00:33:59.311 --rc genhtml_legend=1 00:33:59.311 --rc geninfo_all_blocks=1 00:33:59.311 --rc geninfo_unexecuted_blocks=1 00:33:59.311 00:33:59.311 ' 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:59.311 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:59.311 
06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:33:59.311 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:33:59.312 06:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:05.880 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:05.880 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:34:05.880 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:05.880 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:05.880 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:05.880 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:05.880 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:05.880 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:34:05.880 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:05.880 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:34:05.880 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:34:05.880 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:34:05.880 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:34:05.880 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:34:05.880 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:34:05.880 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:05.880 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:05.880 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:05.880 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:05.880 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:05.880 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:05.880 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:05.880 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:05.881 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:05.881 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:05.881 Found net devices under 0000:af:00.0: cvl_0_0 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:05.881 06:24:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:05.881 Found net devices under 0000:af:00.1: cvl_0_1 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:05.881 06:24:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:05.881 06:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:05.881 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:05.881 06:24:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:05.881 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:05.881 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:05.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:05.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:34:05.881 00:34:05.881 --- 10.0.0.2 ping statistics --- 00:34:05.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:05.881 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:34:05.881 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:05.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:05.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:34:05.881 00:34:05.881 --- 10.0.0.1 ping statistics --- 00:34:05.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:05.881 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:34:05.881 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:05.881 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:34:05.881 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:05.881 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:05.881 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:05.881 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:05.881 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:05.881 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:05.881 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:05.881 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:34:05.881 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:05.881 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:05.881 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:05.881 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1168499 00:34:05.881 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:05.881 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 1168499 00:34:05.881 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1168499 ']' 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:05.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:05.882 [2024-12-15 06:24:25.182474] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:34:05.882 [2024-12-15 06:24:25.182520] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:05.882 [2024-12-15 06:24:25.259945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:05.882 [2024-12-15 06:24:25.280688] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:05.882 [2024-12-15 06:24:25.280723] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:05.882 [2024-12-15 06:24:25.280731] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:05.882 [2024-12-15 06:24:25.280737] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:05.882 [2024-12-15 06:24:25.280742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:05.882 [2024-12-15 06:24:25.281255] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:05.882 [2024-12-15 06:24:25.419214] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:05.882 [2024-12-15 06:24:25.427390] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:05.882 null0 00:34:05.882 [2024-12-15 06:24:25.459376] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1168666 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1168666 /tmp/host.sock 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1168666 ']' 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:05.882 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:05.882 [2024-12-15 06:24:25.528483] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:34:05.882 [2024-12-15 06:24:25.528525] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1168666 ] 00:34:05.882 [2024-12-15 06:24:25.601605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:05.882 [2024-12-15 06:24:25.624442] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.882 06:24:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.882 06:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:06.819 [2024-12-15 06:24:26.816474] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:06.819 [2024-12-15 06:24:26.816497] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:06.819 [2024-12-15 06:24:26.816509] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:06.819 [2024-12-15 06:24:26.942883] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:07.078 [2024-12-15 06:24:27.118843] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:34:07.078 [2024-12-15 06:24:27.119606] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x18d3710:1 started. 
00:34:07.078 [2024-12-15 06:24:27.120923] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:07.078 [2024-12-15 06:24:27.120964] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:07.078 [2024-12-15 06:24:27.120982] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:07.078 [2024-12-15 06:24:27.121000] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:07.078 [2024-12-15 06:24:27.121018] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:07.078 06:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.078 06:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:34:07.078 06:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:07.078 [2024-12-15 06:24:27.124825] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x18d3710 was disconnected and freed. delete nvme_qpair. 
00:34:07.078 06:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:07.078 06:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:07.078 06:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.078 06:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:07.078 06:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:07.078 06:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:07.078 06:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.078 06:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:34:07.078 06:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:34:07.078 06:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:34:07.337 06:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:34:07.337 06:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:07.337 06:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:07.337 06:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:07.337 06:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.337 06:24:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:07.337 06:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:07.337 06:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:07.337 06:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.337 06:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:07.337 06:24:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:08.273 06:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:08.273 06:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:08.273 06:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:08.273 06:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.273 06:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:08.273 06:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:08.273 06:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:08.273 06:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.532 06:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:08.532 06:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:09.469 06:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:34:09.469 06:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:09.469 06:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:09.469 06:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.469 06:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:09.469 06:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:09.469 06:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:09.469 06:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.469 06:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:09.469 06:24:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:10.403 06:24:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:10.403 06:24:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:10.403 06:24:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:10.403 06:24:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.403 06:24:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:10.403 06:24:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:10.403 06:24:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:10.403 06:24:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.403 06:24:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:10.403 06:24:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:11.780 06:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:11.780 06:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:11.780 06:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:11.780 06:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.780 06:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:11.780 06:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:11.780 06:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:11.780 06:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.780 06:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:11.780 06:24:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:12.724 [2024-12-15 06:24:32.562486] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:34:12.724 [2024-12-15 06:24:32.562529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:12.724 [2024-12-15 06:24:32.562540] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:12.724 [2024-12-15 06:24:32.562551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:12.724 [2024-12-15 06:24:32.562558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:12.724 [2024-12-15 06:24:32.562565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:12.724 [2024-12-15 06:24:32.562571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:12.724 [2024-12-15 06:24:32.562578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:12.724 [2024-12-15 06:24:32.562585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:12.724 [2024-12-15 06:24:32.562592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:12.724 [2024-12-15 06:24:32.562599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:12.724 [2024-12-15 06:24:32.562605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18afec0 is same with the state(6) to be set 00:34:12.724 [2024-12-15 06:24:32.572507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18afec0 (9): Bad file descriptor 00:34:12.724 06:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:12.724 06:24:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:12.724 06:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:12.724 06:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.724 06:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:12.724 06:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:12.724 06:24:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:12.724 [2024-12-15 06:24:32.582544] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:12.724 [2024-12-15 06:24:32.582558] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:12.724 [2024-12-15 06:24:32.582565] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:12.724 [2024-12-15 06:24:32.582570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:12.724 [2024-12-15 06:24:32.582589] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:34:13.660 [2024-12-15 06:24:33.622045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:34:13.660 [2024-12-15 06:24:33.622127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18afec0 with addr=10.0.0.2, port=4420 00:34:13.660 [2024-12-15 06:24:33.622163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18afec0 is same with the state(6) to be set 00:34:13.660 [2024-12-15 06:24:33.622215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18afec0 (9): Bad file descriptor 00:34:13.660 [2024-12-15 06:24:33.622343] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:34:13.660 [2024-12-15 06:24:33.622398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:13.660 [2024-12-15 06:24:33.622420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:13.660 [2024-12-15 06:24:33.622444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:13.660 [2024-12-15 06:24:33.622465] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:13.660 [2024-12-15 06:24:33.622481] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:13.660 [2024-12-15 06:24:33.622495] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:13.660 [2024-12-15 06:24:33.622516] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:34:13.660 [2024-12-15 06:24:33.622531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:13.660 06:24:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.660 06:24:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:13.660 06:24:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:14.594 [2024-12-15 06:24:34.625023] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:14.595 [2024-12-15 06:24:34.625043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:14.595 [2024-12-15 06:24:34.625054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:14.595 [2024-12-15 06:24:34.625061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:14.595 [2024-12-15 06:24:34.625068] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:34:14.595 [2024-12-15 06:24:34.625096] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:14.595 [2024-12-15 06:24:34.625101] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:14.595 [2024-12-15 06:24:34.625105] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:34:14.595 [2024-12-15 06:24:34.625125] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:34:14.595 [2024-12-15 06:24:34.625147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:14.595 [2024-12-15 06:24:34.625156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.595 [2024-12-15 06:24:34.625167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:14.595 [2024-12-15 06:24:34.625173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.595 [2024-12-15 06:24:34.625181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:14.595 [2024-12-15 06:24:34.625188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.595 [2024-12-15 06:24:34.625194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:14.595 [2024-12-15 06:24:34.625201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.595 [2024-12-15 06:24:34.625209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:14.595 [2024-12-15 06:24:34.625215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.595 [2024-12-15 06:24:34.625222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:34:14.595 [2024-12-15 06:24:34.625492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189f5e0 (9): Bad file descriptor 00:34:14.595 [2024-12-15 06:24:34.626504] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:34:14.595 [2024-12-15 06:24:34.626514] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:34:14.595 06:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:14.595 06:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:14.595 06:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:14.595 06:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.595 06:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:14.595 06:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:14.595 06:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:14.595 06:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.595 06:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:34:14.595 06:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:14.595 06:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:14.853 06:24:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:34:14.853 06:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:14.853 06:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:14.853 06:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:14.853 06:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.853 06:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:14.853 06:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:14.853 06:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:14.853 06:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.853 06:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:14.854 06:24:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:15.789 06:24:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:15.789 06:24:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:15.789 06:24:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:15.789 06:24:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.789 06:24:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:15.789 06:24:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:15.789 06:24:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:15.789 06:24:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.789 06:24:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:15.789 06:24:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:16.726 [2024-12-15 06:24:36.640357] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:16.726 [2024-12-15 06:24:36.640374] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:16.726 [2024-12-15 06:24:36.640386] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:16.726 [2024-12-15 06:24:36.766759] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:34:16.726 [2024-12-15 06:24:36.861330] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:34:16.726 [2024-12-15 06:24:36.861851] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x18aacd0:1 started. 
00:34:16.726 [2024-12-15 06:24:36.862878] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:16.726 [2024-12-15 06:24:36.862910] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:16.726 [2024-12-15 06:24:36.862926] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:16.726 [2024-12-15 06:24:36.862939] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:34:16.726 [2024-12-15 06:24:36.862947] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:16.985 [2024-12-15 06:24:36.869610] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x18aacd0 was disconnected and freed. delete nvme_qpair. 00:34:16.985 06:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:16.985 06:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:16.985 06:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:16.985 06:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.985 06:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:16.985 06:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:16.985 06:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:16.985 06:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.985 06:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:34:16.985 06:24:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:34:16.985 06:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1168666 00:34:16.985 06:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1168666 ']' 00:34:16.985 06:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1168666 00:34:16.985 06:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:34:16.985 06:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:16.985 06:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1168666 00:34:16.985 06:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:16.985 06:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:16.985 06:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1168666' 00:34:16.985 killing process with pid 1168666 00:34:16.985 06:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1168666 00:34:16.985 06:24:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1168666 00:34:16.985 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:34:16.985 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:16.985 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:34:16.985 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:16.985 
06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:34:16.985 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:16.985 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:17.244 rmmod nvme_tcp 00:34:17.244 rmmod nvme_fabrics 00:34:17.244 rmmod nvme_keyring 00:34:17.244 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:17.244 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:34:17.244 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:34:17.244 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1168499 ']' 00:34:17.244 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1168499 00:34:17.244 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1168499 ']' 00:34:17.244 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1168499 00:34:17.244 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:34:17.244 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:17.244 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1168499 00:34:17.244 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:17.244 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:17.244 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1168499' 00:34:17.244 
killing process with pid 1168499 00:34:17.244 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1168499 00:34:17.244 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1168499 00:34:17.244 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:17.244 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:17.244 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:17.244 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:34:17.244 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:34:17.244 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:17.244 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:34:17.503 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:17.503 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:17.503 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:17.503 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:17.503 06:24:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:19.458 06:24:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:19.458 00:34:19.458 real 0m20.416s 00:34:19.458 user 0m24.671s 00:34:19.458 sys 0m5.758s 00:34:19.458 06:24:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:34:19.458 06:24:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:19.458 ************************************ 00:34:19.458 END TEST nvmf_discovery_remove_ifc 00:34:19.458 ************************************ 00:34:19.458 06:24:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:19.458 06:24:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:19.458 06:24:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:19.458 06:24:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.458 ************************************ 00:34:19.458 START TEST nvmf_identify_kernel_target 00:34:19.458 ************************************ 00:34:19.458 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:19.718 * Looking for test storage... 
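The `killprocess` helper traced in the teardown above follows a recognizable pattern: a `kill -0` liveness probe, a guard so a `sudo` wrapper is never signalled directly, then the kill itself. The sketch below is a paraphrase of the traced commands, not the exact autotest_common.sh source:

```shell
#!/usr/bin/env bash
# Rough sketch of the killprocess flow seen in the trace.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                 # '[' -z $pid ']' guard
    kill -0 "$pid" 2>/dev/null || return 1    # kill -0: is the pid alive?
    if [ "$(uname)" = Linux ]; then
        # Refuse to signal a sudo wrapper; the real worker should be killed.
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap it if it is our child
    return 0
}

sleep 30 &       # disposable child process to demonstrate the helper
victim=$!
killprocess "$victim"
```

The trailing `wait` is why the trace shows `kill` followed by `wait` on the same pid: without reaping, the killed reactor would linger as a zombie until the test script exits.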
00:34:19.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:34:19.718 06:24:39 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:19.718 06:24:39 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:19.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.718 --rc genhtml_branch_coverage=1 00:34:19.718 --rc genhtml_function_coverage=1 00:34:19.718 --rc genhtml_legend=1 00:34:19.718 --rc geninfo_all_blocks=1 00:34:19.718 --rc geninfo_unexecuted_blocks=1 00:34:19.718 00:34:19.718 ' 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:19.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.718 --rc genhtml_branch_coverage=1 00:34:19.718 --rc genhtml_function_coverage=1 00:34:19.718 --rc genhtml_legend=1 00:34:19.718 --rc geninfo_all_blocks=1 00:34:19.718 --rc geninfo_unexecuted_blocks=1 00:34:19.718 00:34:19.718 ' 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:19.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.718 --rc genhtml_branch_coverage=1 00:34:19.718 --rc genhtml_function_coverage=1 00:34:19.718 --rc genhtml_legend=1 00:34:19.718 --rc geninfo_all_blocks=1 00:34:19.718 --rc geninfo_unexecuted_blocks=1 00:34:19.718 00:34:19.718 ' 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:19.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.718 --rc genhtml_branch_coverage=1 00:34:19.718 --rc genhtml_function_coverage=1 00:34:19.718 --rc genhtml_legend=1 00:34:19.718 --rc geninfo_all_blocks=1 00:34:19.718 --rc geninfo_unexecuted_blocks=1 00:34:19.718 00:34:19.718 ' 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
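The `lt 1.15 2` check traced above walks `cmp_versions` in scripts/common.sh: split both version strings on `.`, `-`, and `:` (the `IFS=.-:` lines), then compare field by field numerically, treating missing fields as 0. A self-contained sketch of that comparison, as an approximation of the traced logic rather than the exact source:

```shell
#!/usr/bin/env bash
# Return 0 (true) when dotted version $1 is strictly less than $2.
ver_lt() {
    local IFS='.-:'                  # same separators as IFS=.-: in the trace
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=${#a[@]}
    (( ${#b[@]} > n )) && n=${#b[@]}
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields compare as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                          # equal versions are not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

Numeric per-field comparison is the point of the exercise: a plain string compare would wrongly rank `1.9` above `1.15`, which matters here because the script is gating lcov-specific coverage flags on the installed lcov version.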
00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.718 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:34:19.719 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.719 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:34:19.719 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:19.719 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:19.719 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:19.719 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:19.719 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:19.719 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:19.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:19.719 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:19.719 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:19.719 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:19.719 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
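The `[: : integer expression expected` line above is genuine stderr from nvmf/common.sh line 33: the trace shows `'[' '' -eq 1 ']'`, i.e. an `-eq` test against a variable that is empty. Which variable is being tested is not visible in the trace, so the sketch below uses a placeholder name; defaulting the expansion is the usual fix that keeps the operand numeric:

```shell
#!/usr/bin/env bash
# An empty operand makes [ -eq ] non-numeric, and test(1) complains:
#   [: : integer expression expected
flag=""   # hypothetical placeholder for whichever variable common.sh tests

if [ "$flag" -eq 1 ] 2>/dev/null; then   # fails with the error seen in the log
    echo "bare test: enabled"
fi

# Defaulting the expansion via ${flag:-0} keeps the test numeric:
if [ "${flag:-0}" -eq 1 ]; then
    echo "defaulted test: enabled"
else
    echo "defaulted test: disabled"
fi
```

As the trace shows, the harness tolerates the error because the failed test simply evaluates false and execution continues, but the stderr noise lands in the log on every run.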
00:34:19.719 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:19.719 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:19.719 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:19.719 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:19.719 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:19.719 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:19.719 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:19.719 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:19.719 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:19.719 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:19.719 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:19.719 06:24:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:26.288 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:26.288 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:26.288 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:26.288 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:26.288 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:26.288 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:26.288 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:26.288 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:26.288 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:26.288 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:34:26.288 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:26.288 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:34:26.288 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:26.288 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:34:26.288 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:26.288 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:26.288 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:26.288 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:26.288 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:26.288 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:26.288 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:26.288 06:24:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:26.289 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:26.289 06:24:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:26.289 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:26.289 06:24:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:26.289 Found net devices under 0000:af:00.0: cvl_0_0 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:26.289 Found net devices under 0000:af:00.1: cvl_0_1 
00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:26.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:26.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:34:26.289 00:34:26.289 --- 10.0.0.2 ping statistics --- 00:34:26.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:26.289 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:26.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:26.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:34:26.289 00:34:26.289 --- 10.0.0.1 ping statistics --- 00:34:26.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:26.289 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:34:26.289 
06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.289 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.290 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.290 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.290 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.290 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.290 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.290 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.290 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.290 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:34:26.290 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:26.290 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:26.290 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:26.290 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:26.290 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:26.290 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:26.290 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:34:26.290 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:26.290 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:26.290 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:26.290 06:24:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:28.196 Waiting for block devices as requested 00:34:28.455 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:28.455 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:28.455 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:28.714 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:28.714 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:28.714 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:28.972 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:28.972 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:28.972 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:29.231 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:29.231 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:29.231 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:29.231 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:29.490 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:29.490 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:34:29.490 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:29.750 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:29.750 06:24:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:29.750 06:24:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:29.750 06:24:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:29.750 06:24:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:29.750 06:24:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:29.750 06:24:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:29.750 06:24:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:29.750 06:24:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:29.750 06:24:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:29.750 No valid GPT data, bailing 00:34:29.750 06:24:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:29.750 06:24:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:34:29.750 06:24:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:34:29.750 06:24:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:29.750 06:24:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:29.750 06:24:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:29.750 06:24:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:29.750 06:24:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:29.750 06:24:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:29.750 06:24:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:34:29.750 06:24:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:29.750 06:24:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:34:29.750 06:24:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:29.750 06:24:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:34:29.750 06:24:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:34:29.750 06:24:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:34:29.750 06:24:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:29.750 06:24:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:30.010 00:34:30.010 Discovery Log Number of Records 2, Generation counter 2 00:34:30.010 =====Discovery Log Entry 0====== 00:34:30.010 trtype: tcp 00:34:30.010 adrfam: ipv4 00:34:30.010 subtype: current discovery subsystem 
00:34:30.010 treq: not specified, sq flow control disable supported 00:34:30.010 portid: 1 00:34:30.010 trsvcid: 4420 00:34:30.010 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:30.010 traddr: 10.0.0.1 00:34:30.010 eflags: none 00:34:30.010 sectype: none 00:34:30.010 =====Discovery Log Entry 1====== 00:34:30.010 trtype: tcp 00:34:30.010 adrfam: ipv4 00:34:30.010 subtype: nvme subsystem 00:34:30.010 treq: not specified, sq flow control disable supported 00:34:30.010 portid: 1 00:34:30.010 trsvcid: 4420 00:34:30.010 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:30.010 traddr: 10.0.0.1 00:34:30.010 eflags: none 00:34:30.010 sectype: none 00:34:30.010 06:24:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:34:30.010 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:34:30.010 ===================================================== 00:34:30.010 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:34:30.010 ===================================================== 00:34:30.010 Controller Capabilities/Features 00:34:30.010 ================================ 00:34:30.010 Vendor ID: 0000 00:34:30.010 Subsystem Vendor ID: 0000 00:34:30.010 Serial Number: 1a76ce478fd1990b372a 00:34:30.010 Model Number: Linux 00:34:30.010 Firmware Version: 6.8.9-20 00:34:30.010 Recommended Arb Burst: 0 00:34:30.010 IEEE OUI Identifier: 00 00 00 00:34:30.010 Multi-path I/O 00:34:30.010 May have multiple subsystem ports: No 00:34:30.010 May have multiple controllers: No 00:34:30.010 Associated with SR-IOV VF: No 00:34:30.010 Max Data Transfer Size: Unlimited 00:34:30.010 Max Number of Namespaces: 0 00:34:30.010 Max Number of I/O Queues: 1024 00:34:30.010 NVMe Specification Version (VS): 1.3 00:34:30.010 NVMe Specification Version (Identify): 1.3 00:34:30.010 Maximum Queue Entries: 1024 
00:34:30.010 Contiguous Queues Required: No 00:34:30.010 Arbitration Mechanisms Supported 00:34:30.010 Weighted Round Robin: Not Supported 00:34:30.010 Vendor Specific: Not Supported 00:34:30.010 Reset Timeout: 7500 ms 00:34:30.010 Doorbell Stride: 4 bytes 00:34:30.010 NVM Subsystem Reset: Not Supported 00:34:30.010 Command Sets Supported 00:34:30.010 NVM Command Set: Supported 00:34:30.010 Boot Partition: Not Supported 00:34:30.010 Memory Page Size Minimum: 4096 bytes 00:34:30.010 Memory Page Size Maximum: 4096 bytes 00:34:30.010 Persistent Memory Region: Not Supported 00:34:30.010 Optional Asynchronous Events Supported 00:34:30.010 Namespace Attribute Notices: Not Supported 00:34:30.010 Firmware Activation Notices: Not Supported 00:34:30.010 ANA Change Notices: Not Supported 00:34:30.010 PLE Aggregate Log Change Notices: Not Supported 00:34:30.010 LBA Status Info Alert Notices: Not Supported 00:34:30.010 EGE Aggregate Log Change Notices: Not Supported 00:34:30.010 Normal NVM Subsystem Shutdown event: Not Supported 00:34:30.011 Zone Descriptor Change Notices: Not Supported 00:34:30.011 Discovery Log Change Notices: Supported 00:34:30.011 Controller Attributes 00:34:30.011 128-bit Host Identifier: Not Supported 00:34:30.011 Non-Operational Permissive Mode: Not Supported 00:34:30.011 NVM Sets: Not Supported 00:34:30.011 Read Recovery Levels: Not Supported 00:34:30.011 Endurance Groups: Not Supported 00:34:30.011 Predictable Latency Mode: Not Supported 00:34:30.011 Traffic Based Keep ALive: Not Supported 00:34:30.011 Namespace Granularity: Not Supported 00:34:30.011 SQ Associations: Not Supported 00:34:30.011 UUID List: Not Supported 00:34:30.011 Multi-Domain Subsystem: Not Supported 00:34:30.011 Fixed Capacity Management: Not Supported 00:34:30.011 Variable Capacity Management: Not Supported 00:34:30.011 Delete Endurance Group: Not Supported 00:34:30.011 Delete NVM Set: Not Supported 00:34:30.011 Extended LBA Formats Supported: Not Supported 00:34:30.011 Flexible 
Data Placement Supported: Not Supported 00:34:30.011 00:34:30.011 Controller Memory Buffer Support 00:34:30.011 ================================ 00:34:30.011 Supported: No 00:34:30.011 00:34:30.011 Persistent Memory Region Support 00:34:30.011 ================================ 00:34:30.011 Supported: No 00:34:30.011 00:34:30.011 Admin Command Set Attributes 00:34:30.011 ============================ 00:34:30.011 Security Send/Receive: Not Supported 00:34:30.011 Format NVM: Not Supported 00:34:30.011 Firmware Activate/Download: Not Supported 00:34:30.011 Namespace Management: Not Supported 00:34:30.011 Device Self-Test: Not Supported 00:34:30.011 Directives: Not Supported 00:34:30.011 NVMe-MI: Not Supported 00:34:30.011 Virtualization Management: Not Supported 00:34:30.011 Doorbell Buffer Config: Not Supported 00:34:30.011 Get LBA Status Capability: Not Supported 00:34:30.011 Command & Feature Lockdown Capability: Not Supported 00:34:30.011 Abort Command Limit: 1 00:34:30.011 Async Event Request Limit: 1 00:34:30.011 Number of Firmware Slots: N/A 00:34:30.011 Firmware Slot 1 Read-Only: N/A 00:34:30.011 Firmware Activation Without Reset: N/A 00:34:30.011 Multiple Update Detection Support: N/A 00:34:30.011 Firmware Update Granularity: No Information Provided 00:34:30.011 Per-Namespace SMART Log: No 00:34:30.011 Asymmetric Namespace Access Log Page: Not Supported 00:34:30.011 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:34:30.011 Command Effects Log Page: Not Supported 00:34:30.011 Get Log Page Extended Data: Supported 00:34:30.011 Telemetry Log Pages: Not Supported 00:34:30.011 Persistent Event Log Pages: Not Supported 00:34:30.011 Supported Log Pages Log Page: May Support 00:34:30.011 Commands Supported & Effects Log Page: Not Supported 00:34:30.011 Feature Identifiers & Effects Log Page:May Support 00:34:30.011 NVMe-MI Commands & Effects Log Page: May Support 00:34:30.011 Data Area 4 for Telemetry Log: Not Supported 00:34:30.011 Error Log Page Entries 
Supported: 1 00:34:30.011 Keep Alive: Not Supported 00:34:30.011 00:34:30.011 NVM Command Set Attributes 00:34:30.011 ========================== 00:34:30.011 Submission Queue Entry Size 00:34:30.011 Max: 1 00:34:30.011 Min: 1 00:34:30.011 Completion Queue Entry Size 00:34:30.011 Max: 1 00:34:30.011 Min: 1 00:34:30.011 Number of Namespaces: 0 00:34:30.011 Compare Command: Not Supported 00:34:30.011 Write Uncorrectable Command: Not Supported 00:34:30.011 Dataset Management Command: Not Supported 00:34:30.011 Write Zeroes Command: Not Supported 00:34:30.011 Set Features Save Field: Not Supported 00:34:30.011 Reservations: Not Supported 00:34:30.011 Timestamp: Not Supported 00:34:30.011 Copy: Not Supported 00:34:30.011 Volatile Write Cache: Not Present 00:34:30.011 Atomic Write Unit (Normal): 1 00:34:30.011 Atomic Write Unit (PFail): 1 00:34:30.011 Atomic Compare & Write Unit: 1 00:34:30.011 Fused Compare & Write: Not Supported 00:34:30.011 Scatter-Gather List 00:34:30.011 SGL Command Set: Supported 00:34:30.011 SGL Keyed: Not Supported 00:34:30.011 SGL Bit Bucket Descriptor: Not Supported 00:34:30.011 SGL Metadata Pointer: Not Supported 00:34:30.011 Oversized SGL: Not Supported 00:34:30.011 SGL Metadata Address: Not Supported 00:34:30.011 SGL Offset: Supported 00:34:30.011 Transport SGL Data Block: Not Supported 00:34:30.011 Replay Protected Memory Block: Not Supported 00:34:30.011 00:34:30.011 Firmware Slot Information 00:34:30.011 ========================= 00:34:30.011 Active slot: 0 00:34:30.011 00:34:30.011 00:34:30.011 Error Log 00:34:30.011 ========= 00:34:30.011 00:34:30.011 Active Namespaces 00:34:30.011 ================= 00:34:30.011 Discovery Log Page 00:34:30.011 ================== 00:34:30.011 Generation Counter: 2 00:34:30.011 Number of Records: 2 00:34:30.011 Record Format: 0 00:34:30.011 00:34:30.011 Discovery Log Entry 0 00:34:30.011 ---------------------- 00:34:30.011 Transport Type: 3 (TCP) 00:34:30.011 Address Family: 1 (IPv4) 00:34:30.011 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:34:30.011 Entry Flags: 00:34:30.011 Duplicate Returned Information: 0 00:34:30.011 Explicit Persistent Connection Support for Discovery: 0 00:34:30.011 Transport Requirements: 00:34:30.011 Secure Channel: Not Specified 00:34:30.011 Port ID: 1 (0x0001) 00:34:30.011 Controller ID: 65535 (0xffff) 00:34:30.011 Admin Max SQ Size: 32 00:34:30.011 Transport Service Identifier: 4420 00:34:30.011 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:34:30.011 Transport Address: 10.0.0.1 00:34:30.011 Discovery Log Entry 1 00:34:30.011 ---------------------- 00:34:30.011 Transport Type: 3 (TCP) 00:34:30.011 Address Family: 1 (IPv4) 00:34:30.011 Subsystem Type: 2 (NVM Subsystem) 00:34:30.011 Entry Flags: 00:34:30.011 Duplicate Returned Information: 0 00:34:30.011 Explicit Persistent Connection Support for Discovery: 0 00:34:30.011 Transport Requirements: 00:34:30.011 Secure Channel: Not Specified 00:34:30.011 Port ID: 1 (0x0001) 00:34:30.011 Controller ID: 65535 (0xffff) 00:34:30.011 Admin Max SQ Size: 32 00:34:30.011 Transport Service Identifier: 4420 00:34:30.011 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:34:30.011 Transport Address: 10.0.0.1 00:34:30.011 06:24:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:30.011 get_feature(0x01) failed 00:34:30.011 get_feature(0x02) failed 00:34:30.011 get_feature(0x04) failed 00:34:30.011 ===================================================== 00:34:30.011 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:30.011 ===================================================== 00:34:30.011 Controller Capabilities/Features 00:34:30.011 ================================ 00:34:30.011 Vendor ID: 0000 00:34:30.011 Subsystem Vendor ID: 
0000 00:34:30.011 Serial Number: e5940431fec52098f0a7 00:34:30.011 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:34:30.011 Firmware Version: 6.8.9-20 00:34:30.011 Recommended Arb Burst: 6 00:34:30.011 IEEE OUI Identifier: 00 00 00 00:34:30.011 Multi-path I/O 00:34:30.011 May have multiple subsystem ports: Yes 00:34:30.011 May have multiple controllers: Yes 00:34:30.011 Associated with SR-IOV VF: No 00:34:30.011 Max Data Transfer Size: Unlimited 00:34:30.011 Max Number of Namespaces: 1024 00:34:30.011 Max Number of I/O Queues: 128 00:34:30.011 NVMe Specification Version (VS): 1.3 00:34:30.011 NVMe Specification Version (Identify): 1.3 00:34:30.011 Maximum Queue Entries: 1024 00:34:30.011 Contiguous Queues Required: No 00:34:30.011 Arbitration Mechanisms Supported 00:34:30.011 Weighted Round Robin: Not Supported 00:34:30.011 Vendor Specific: Not Supported 00:34:30.011 Reset Timeout: 7500 ms 00:34:30.011 Doorbell Stride: 4 bytes 00:34:30.011 NVM Subsystem Reset: Not Supported 00:34:30.011 Command Sets Supported 00:34:30.011 NVM Command Set: Supported 00:34:30.011 Boot Partition: Not Supported 00:34:30.011 Memory Page Size Minimum: 4096 bytes 00:34:30.011 Memory Page Size Maximum: 4096 bytes 00:34:30.011 Persistent Memory Region: Not Supported 00:34:30.011 Optional Asynchronous Events Supported 00:34:30.011 Namespace Attribute Notices: Supported 00:34:30.011 Firmware Activation Notices: Not Supported 00:34:30.011 ANA Change Notices: Supported 00:34:30.011 PLE Aggregate Log Change Notices: Not Supported 00:34:30.011 LBA Status Info Alert Notices: Not Supported 00:34:30.011 EGE Aggregate Log Change Notices: Not Supported 00:34:30.011 Normal NVM Subsystem Shutdown event: Not Supported 00:34:30.011 Zone Descriptor Change Notices: Not Supported 00:34:30.011 Discovery Log Change Notices: Not Supported 00:34:30.011 Controller Attributes 00:34:30.011 128-bit Host Identifier: Supported 00:34:30.011 Non-Operational Permissive Mode: Not Supported 00:34:30.011 NVM Sets: Not 
Supported 00:34:30.011 Read Recovery Levels: Not Supported 00:34:30.011 Endurance Groups: Not Supported 00:34:30.011 Predictable Latency Mode: Not Supported 00:34:30.012 Traffic Based Keep ALive: Supported 00:34:30.012 Namespace Granularity: Not Supported 00:34:30.012 SQ Associations: Not Supported 00:34:30.012 UUID List: Not Supported 00:34:30.012 Multi-Domain Subsystem: Not Supported 00:34:30.012 Fixed Capacity Management: Not Supported 00:34:30.012 Variable Capacity Management: Not Supported 00:34:30.012 Delete Endurance Group: Not Supported 00:34:30.012 Delete NVM Set: Not Supported 00:34:30.012 Extended LBA Formats Supported: Not Supported 00:34:30.012 Flexible Data Placement Supported: Not Supported 00:34:30.012 00:34:30.012 Controller Memory Buffer Support 00:34:30.012 ================================ 00:34:30.012 Supported: No 00:34:30.012 00:34:30.012 Persistent Memory Region Support 00:34:30.012 ================================ 00:34:30.012 Supported: No 00:34:30.012 00:34:30.012 Admin Command Set Attributes 00:34:30.012 ============================ 00:34:30.012 Security Send/Receive: Not Supported 00:34:30.012 Format NVM: Not Supported 00:34:30.012 Firmware Activate/Download: Not Supported 00:34:30.012 Namespace Management: Not Supported 00:34:30.012 Device Self-Test: Not Supported 00:34:30.012 Directives: Not Supported 00:34:30.012 NVMe-MI: Not Supported 00:34:30.012 Virtualization Management: Not Supported 00:34:30.012 Doorbell Buffer Config: Not Supported 00:34:30.012 Get LBA Status Capability: Not Supported 00:34:30.012 Command & Feature Lockdown Capability: Not Supported 00:34:30.012 Abort Command Limit: 4 00:34:30.012 Async Event Request Limit: 4 00:34:30.012 Number of Firmware Slots: N/A 00:34:30.012 Firmware Slot 1 Read-Only: N/A 00:34:30.012 Firmware Activation Without Reset: N/A 00:34:30.012 Multiple Update Detection Support: N/A 00:34:30.012 Firmware Update Granularity: No Information Provided 00:34:30.012 Per-Namespace SMART Log: Yes 
00:34:30.012 Asymmetric Namespace Access Log Page: Supported 00:34:30.012 ANA Transition Time : 10 sec 00:34:30.012 00:34:30.012 Asymmetric Namespace Access Capabilities 00:34:30.012 ANA Optimized State : Supported 00:34:30.012 ANA Non-Optimized State : Supported 00:34:30.012 ANA Inaccessible State : Supported 00:34:30.012 ANA Persistent Loss State : Supported 00:34:30.012 ANA Change State : Supported 00:34:30.012 ANAGRPID is not changed : No 00:34:30.012 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:34:30.012 00:34:30.012 ANA Group Identifier Maximum : 128 00:34:30.012 Number of ANA Group Identifiers : 128 00:34:30.012 Max Number of Allowed Namespaces : 1024 00:34:30.012 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:34:30.012 Command Effects Log Page: Supported 00:34:30.012 Get Log Page Extended Data: Supported 00:34:30.012 Telemetry Log Pages: Not Supported 00:34:30.012 Persistent Event Log Pages: Not Supported 00:34:30.012 Supported Log Pages Log Page: May Support 00:34:30.012 Commands Supported & Effects Log Page: Not Supported 00:34:30.012 Feature Identifiers & Effects Log Page:May Support 00:34:30.012 NVMe-MI Commands & Effects Log Page: May Support 00:34:30.012 Data Area 4 for Telemetry Log: Not Supported 00:34:30.012 Error Log Page Entries Supported: 128 00:34:30.012 Keep Alive: Supported 00:34:30.012 Keep Alive Granularity: 1000 ms 00:34:30.012 00:34:30.012 NVM Command Set Attributes 00:34:30.012 ========================== 00:34:30.012 Submission Queue Entry Size 00:34:30.012 Max: 64 00:34:30.012 Min: 64 00:34:30.012 Completion Queue Entry Size 00:34:30.012 Max: 16 00:34:30.012 Min: 16 00:34:30.012 Number of Namespaces: 1024 00:34:30.012 Compare Command: Not Supported 00:34:30.012 Write Uncorrectable Command: Not Supported 00:34:30.012 Dataset Management Command: Supported 00:34:30.012 Write Zeroes Command: Supported 00:34:30.012 Set Features Save Field: Not Supported 00:34:30.012 Reservations: Not Supported 00:34:30.012 Timestamp: Not Supported 
00:34:30.012 Copy: Not Supported 00:34:30.012 Volatile Write Cache: Present 00:34:30.012 Atomic Write Unit (Normal): 1 00:34:30.012 Atomic Write Unit (PFail): 1 00:34:30.012 Atomic Compare & Write Unit: 1 00:34:30.012 Fused Compare & Write: Not Supported 00:34:30.012 Scatter-Gather List 00:34:30.012 SGL Command Set: Supported 00:34:30.012 SGL Keyed: Not Supported 00:34:30.012 SGL Bit Bucket Descriptor: Not Supported 00:34:30.012 SGL Metadata Pointer: Not Supported 00:34:30.012 Oversized SGL: Not Supported 00:34:30.012 SGL Metadata Address: Not Supported 00:34:30.012 SGL Offset: Supported 00:34:30.012 Transport SGL Data Block: Not Supported 00:34:30.012 Replay Protected Memory Block: Not Supported 00:34:30.012 00:34:30.012 Firmware Slot Information 00:34:30.012 ========================= 00:34:30.012 Active slot: 0 00:34:30.012 00:34:30.012 Asymmetric Namespace Access 00:34:30.012 =========================== 00:34:30.012 Change Count : 0 00:34:30.012 Number of ANA Group Descriptors : 1 00:34:30.012 ANA Group Descriptor : 0 00:34:30.012 ANA Group ID : 1 00:34:30.012 Number of NSID Values : 1 00:34:30.012 Change Count : 0 00:34:30.012 ANA State : 1 00:34:30.012 Namespace Identifier : 1 00:34:30.012 00:34:30.012 Commands Supported and Effects 00:34:30.012 ============================== 00:34:30.012 Admin Commands 00:34:30.012 -------------- 00:34:30.012 Get Log Page (02h): Supported 00:34:30.012 Identify (06h): Supported 00:34:30.012 Abort (08h): Supported 00:34:30.012 Set Features (09h): Supported 00:34:30.012 Get Features (0Ah): Supported 00:34:30.012 Asynchronous Event Request (0Ch): Supported 00:34:30.012 Keep Alive (18h): Supported 00:34:30.012 I/O Commands 00:34:30.012 ------------ 00:34:30.012 Flush (00h): Supported 00:34:30.012 Write (01h): Supported LBA-Change 00:34:30.012 Read (02h): Supported 00:34:30.012 Write Zeroes (08h): Supported LBA-Change 00:34:30.012 Dataset Management (09h): Supported 00:34:30.012 00:34:30.012 Error Log 00:34:30.012 ========= 
00:34:30.012 Entry: 0 00:34:30.012 Error Count: 0x3 00:34:30.012 Submission Queue Id: 0x0 00:34:30.012 Command Id: 0x5 00:34:30.012 Phase Bit: 0 00:34:30.012 Status Code: 0x2 00:34:30.012 Status Code Type: 0x0 00:34:30.012 Do Not Retry: 1 00:34:30.012 Error Location: 0x28 00:34:30.012 LBA: 0x0 00:34:30.012 Namespace: 0x0 00:34:30.012 Vendor Log Page: 0x0 00:34:30.012 ----------- 00:34:30.012 Entry: 1 00:34:30.012 Error Count: 0x2 00:34:30.012 Submission Queue Id: 0x0 00:34:30.012 Command Id: 0x5 00:34:30.012 Phase Bit: 0 00:34:30.012 Status Code: 0x2 00:34:30.012 Status Code Type: 0x0 00:34:30.012 Do Not Retry: 1 00:34:30.012 Error Location: 0x28 00:34:30.012 LBA: 0x0 00:34:30.012 Namespace: 0x0 00:34:30.012 Vendor Log Page: 0x0 00:34:30.012 ----------- 00:34:30.012 Entry: 2 00:34:30.012 Error Count: 0x1 00:34:30.012 Submission Queue Id: 0x0 00:34:30.012 Command Id: 0x4 00:34:30.012 Phase Bit: 0 00:34:30.012 Status Code: 0x2 00:34:30.012 Status Code Type: 0x0 00:34:30.012 Do Not Retry: 1 00:34:30.012 Error Location: 0x28 00:34:30.012 LBA: 0x0 00:34:30.012 Namespace: 0x0 00:34:30.012 Vendor Log Page: 0x0 00:34:30.012 00:34:30.012 Number of Queues 00:34:30.012 ================ 00:34:30.012 Number of I/O Submission Queues: 128 00:34:30.012 Number of I/O Completion Queues: 128 00:34:30.012 00:34:30.012 ZNS Specific Controller Data 00:34:30.012 ============================ 00:34:30.012 Zone Append Size Limit: 0 00:34:30.012 00:34:30.012 00:34:30.012 Active Namespaces 00:34:30.012 ================= 00:34:30.012 get_feature(0x05) failed 00:34:30.012 Namespace ID:1 00:34:30.012 Command Set Identifier: NVM (00h) 00:34:30.012 Deallocate: Supported 00:34:30.012 Deallocated/Unwritten Error: Not Supported 00:34:30.012 Deallocated Read Value: Unknown 00:34:30.012 Deallocate in Write Zeroes: Not Supported 00:34:30.012 Deallocated Guard Field: 0xFFFF 00:34:30.012 Flush: Supported 00:34:30.012 Reservation: Not Supported 00:34:30.012 Namespace Sharing Capabilities: Multiple 
Controllers 00:34:30.012 Size (in LBAs): 1953525168 (931GiB) 00:34:30.012 Capacity (in LBAs): 1953525168 (931GiB) 00:34:30.012 Utilization (in LBAs): 1953525168 (931GiB) 00:34:30.012 UUID: dbd8dcfa-bbfb-45d9-a96f-a0bab98c253b 00:34:30.012 Thin Provisioning: Not Supported 00:34:30.012 Per-NS Atomic Units: Yes 00:34:30.012 Atomic Boundary Size (Normal): 0 00:34:30.012 Atomic Boundary Size (PFail): 0 00:34:30.012 Atomic Boundary Offset: 0 00:34:30.012 NGUID/EUI64 Never Reused: No 00:34:30.012 ANA group ID: 1 00:34:30.012 Namespace Write Protected: No 00:34:30.012 Number of LBA Formats: 1 00:34:30.012 Current LBA Format: LBA Format #00 00:34:30.012 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:30.012 00:34:30.012 06:24:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:34:30.012 06:24:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:30.012 06:24:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:34:30.013 06:24:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:30.013 06:24:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:34:30.013 06:24:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:30.013 06:24:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:30.013 rmmod nvme_tcp 00:34:30.272 rmmod nvme_fabrics 00:34:30.272 06:24:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:30.272 06:24:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:34:30.272 06:24:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:34:30.272 06:24:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:34:30.272 06:24:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:30.272 06:24:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:30.272 06:24:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:30.272 06:24:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:34:30.272 06:24:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:34:30.272 06:24:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:30.272 06:24:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:30.272 06:24:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:30.272 06:24:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:30.272 06:24:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:30.272 06:24:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:30.272 06:24:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:32.176 06:24:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:32.176 06:24:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:34:32.176 06:24:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:32.176 06:24:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:34:32.176 06:24:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:32.176 06:24:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:32.176 06:24:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:32.176 06:24:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:32.176 06:24:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:32.176 06:24:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:32.434 06:24:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:34.968 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:34.968 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:34.968 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:34.968 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:34.968 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:35.226 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:35.226 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:35.226 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:35.226 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:35.226 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:35.226 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:35.226 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:35.226 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:35.226 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:35.226 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:35.226 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:34:36.163 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:36.163 00:34:36.163 real 0m16.613s 00:34:36.163 user 0m4.404s 00:34:36.163 sys 0m8.633s 00:34:36.163 06:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:36.163 06:24:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:36.163 ************************************ 00:34:36.163 END TEST nvmf_identify_kernel_target 00:34:36.163 ************************************ 00:34:36.164 06:24:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:36.164 06:24:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:36.164 06:24:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:36.164 06:24:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.164 ************************************ 00:34:36.164 START TEST nvmf_auth_host 00:34:36.164 ************************************ 00:34:36.164 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:36.164 * Looking for test storage... 
00:34:36.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:36.164 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:36.164 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:34:36.164 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:36.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:36.424 --rc genhtml_branch_coverage=1 00:34:36.424 --rc genhtml_function_coverage=1 00:34:36.424 --rc genhtml_legend=1 00:34:36.424 --rc geninfo_all_blocks=1 00:34:36.424 --rc geninfo_unexecuted_blocks=1 00:34:36.424 00:34:36.424 ' 00:34:36.424 06:24:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:36.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:36.424 --rc genhtml_branch_coverage=1 00:34:36.424 --rc genhtml_function_coverage=1 00:34:36.424 --rc genhtml_legend=1 00:34:36.424 --rc geninfo_all_blocks=1 00:34:36.424 --rc geninfo_unexecuted_blocks=1 00:34:36.424 00:34:36.424 ' 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:36.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:36.424 --rc genhtml_branch_coverage=1 00:34:36.424 --rc genhtml_function_coverage=1 00:34:36.424 --rc genhtml_legend=1 00:34:36.424 --rc geninfo_all_blocks=1 00:34:36.424 --rc geninfo_unexecuted_blocks=1 00:34:36.424 00:34:36.424 ' 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:36.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:36.424 --rc genhtml_branch_coverage=1 00:34:36.424 --rc genhtml_function_coverage=1 00:34:36.424 --rc genhtml_legend=1 00:34:36.424 --rc geninfo_all_blocks=1 00:34:36.424 --rc geninfo_unexecuted_blocks=1 00:34:36.424 00:34:36.424 ' 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.424 06:24:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:36.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:36.424 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:36.425 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:36.425 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:36.425 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:34:36.425 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:36.425 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:34:36.425 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:36.425 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:36.425 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:36.425 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:36.425 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:34:36.425 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:36.425 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:36.425 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:36.425 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:36.425 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:36.425 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:36.425 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:36.425 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:36.425 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:36.425 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:36.425 06:24:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:36.425 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:34:36.425 06:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:42.995 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:42.995 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:42.995 Found net devices under 0000:af:00.0: cvl_0_0 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:42.995 Found net devices under 0000:af:00.1: cvl_0_1 00:34:42.995 06:25:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:42.995 06:25:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:42.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:42.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:34:42.995 00:34:42.995 --- 10.0.0.2 ping statistics --- 00:34:42.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:42.995 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:42.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:42.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:34:42.995 00:34:42.995 --- 10.0.0.1 ping statistics --- 00:34:42.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:42.995 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:42.995 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1180283 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1180283 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1180283 ']' 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:42.996 06:25:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=19db94f0156dd3b459010459611e75e7 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.hYg 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 19db94f0156dd3b459010459611e75e7 0 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 19db94f0156dd3b459010459611e75e7 0 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=19db94f0156dd3b459010459611e75e7 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.hYg 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.hYg 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.hYg 
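The `gen_dhchap_key null 32` trace above reads 16 random bytes as hex from `/dev/urandom`, then pipes through `python -` (nvmf/common.sh@733) to produce the DHHC-1 secret written to `/tmp/spdk.key-null.hYg`. A minimal sketch of that formatting step, assuming the TP 8006-style representation (base64 of the raw key bytes followed by their little-endian CRC32, with the digest index from the `digests` map in the hex field):

```shell
# Sketch of the format_dhchap_key step traced above; the CRC32+base64
# encoding is an assumption about what the inline python does.
key=19db94f0156dd3b459010459611e75e7   # keys[0] material from this log
digest=0                               # 0 = 'null' per the digests map
out=$(python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
raw = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(raw).to_bytes(4, "little")   # little-endian CRC32 suffix
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(raw + crc).decode()}:")
EOF
)
printf '%s\n' "$out"
```

The resulting string has the shape `DHHC-1:00:<base64>:`; the file is then chmod 0600 and its path stored in `keys[0]`, as the trace shows.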
00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=42ea4aabcf11aa1e63e0661f7d2e19ee684d7d0628e6a5405469c90bf04f431c 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.R7B 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 42ea4aabcf11aa1e63e0661f7d2e19ee684d7d0628e6a5405469c90bf04f431c 3 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 42ea4aabcf11aa1e63e0661f7d2e19ee684d7d0628e6a5405469c90bf04f431c 3 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=42ea4aabcf11aa1e63e0661f7d2e19ee684d7d0628e6a5405469c90bf04f431c 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.R7B 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.R7B 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.R7B 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=32ee40437dff17d24555d485daff4d6a79ce88425dc8e7b2 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.e4v 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 32ee40437dff17d24555d485daff4d6a79ce88425dc8e7b2 0 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 32ee40437dff17d24555d485daff4d6a79ce88425dc8e7b2 0 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=32ee40437dff17d24555d485daff4d6a79ce88425dc8e7b2 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.e4v 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.e4v 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.e4v 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d9f56b80ce3a5a38f3683b2fe868861143220fb1ece5de94 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.0tl 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d9f56b80ce3a5a38f3683b2fe868861143220fb1ece5de94 2 00:34:42.996 06:25:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d9f56b80ce3a5a38f3683b2fe868861143220fb1ece5de94 2 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d9f56b80ce3a5a38f3683b2fe868861143220fb1ece5de94 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.0tl 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.0tl 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.0tl 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5df0ab0a56ac82d17ab321c7a05ec715 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Glx 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5df0ab0a56ac82d17ab321c7a05ec715 1 00:34:42.996 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5df0ab0a56ac82d17ab321c7a05ec715 1 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5df0ab0a56ac82d17ab321c7a05ec715 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Glx 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Glx 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Glx 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=81aa74dfad72b025cf51652e55d7829e 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.HuH 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 81aa74dfad72b025cf51652e55d7829e 1 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 81aa74dfad72b025cf51652e55d7829e 1 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=81aa74dfad72b025cf51652e55d7829e 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.HuH 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.HuH 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.HuH 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:42.997 06:25:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1adb483bd5630ebb383d2e5d091aa98b461309781ca1919a 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:42.997 06:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.hLP 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1adb483bd5630ebb383d2e5d091aa98b461309781ca1919a 2 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1adb483bd5630ebb383d2e5d091aa98b461309781ca1919a 2 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1adb483bd5630ebb383d2e5d091aa98b461309781ca1919a 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.hLP 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.hLP 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.hLP 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=be8b93723bf955048b25be0760e9eabc 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.mFG 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key be8b93723bf955048b25be0760e9eabc 0 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 be8b93723bf955048b25be0760e9eabc 0 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=be8b93723bf955048b25be0760e9eabc 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.mFG 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.mFG 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.mFG 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a8dc77c8d43b18c2d231a4ba182eb47a823eb7a959dfd6e6dbfeaf6b24fd2872 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.UxV 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a8dc77c8d43b18c2d231a4ba182eb47a823eb7a959dfd6e6dbfeaf6b24fd2872 3 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a8dc77c8d43b18c2d231a4ba182eb47a823eb7a959dfd6e6dbfeaf6b24fd2872 3 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a8dc77c8d43b18c2d231a4ba182eb47a823eb7a959dfd6e6dbfeaf6b24fd2872 00:34:42.997 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:42.997 06:25:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.UxV 00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.UxV 00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.UxV 00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1180283 00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1180283 ']' 00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:43.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.hYg 00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.R7B ]] 00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.R7B 00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.e4v 00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.0tl ]] 00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0tl 00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.256 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.515 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.515 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:43.515 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Glx 00:34:43.515 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.515 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.515 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.515 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.HuH ]] 00:34:43.515 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HuH 00:34:43.515 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.hLP 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.mFG ]] 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.mFG 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.UxV 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:43.516 06:25:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:43.516 06:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:46.047 Waiting for block devices as requested 00:34:46.047 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:46.306 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:46.306 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:46.306 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:46.564 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:46.565 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:46.565 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:46.565 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:46.823 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:46.823 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:46.823 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:46.823 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:47.081 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:47.081 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:47.081 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:47.339 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:47.339 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:47.906 06:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:47.906 06:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:47.906 06:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:47.906 06:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:47.906 06:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:34:47.906 06:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:47.906 06:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:47.906 06:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:47.906 06:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:47.906 No valid GPT data, bailing 00:34:47.906 06:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:47.906 06:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:34:47.906 06:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:34:47.906 06:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:47.906 06:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:47.906 06:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:47.906 06:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:47.906 06:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:47.906 06:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:47.906 06:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:34:47.906 06:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:47.906 06:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:34:47.906 06:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:34:47.906 06:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:34:47.906 06:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:34:47.906 06:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:34:47.906 06:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:47.906 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:48.165 00:34:48.165 Discovery Log Number of Records 2, Generation counter 2 00:34:48.165 =====Discovery Log Entry 0====== 00:34:48.165 trtype: tcp 00:34:48.165 adrfam: ipv4 00:34:48.165 subtype: current discovery subsystem 00:34:48.165 treq: not specified, sq flow control disable supported 00:34:48.165 portid: 1 00:34:48.165 trsvcid: 4420 00:34:48.165 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:48.165 traddr: 10.0.0.1 00:34:48.165 eflags: none 00:34:48.165 sectype: none 00:34:48.165 =====Discovery Log Entry 1====== 00:34:48.165 trtype: tcp 00:34:48.165 adrfam: ipv4 00:34:48.165 subtype: nvme subsystem 00:34:48.165 treq: not specified, sq flow control disable supported 00:34:48.165 portid: 1 00:34:48.165 trsvcid: 4420 00:34:48.165 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:48.165 traddr: 10.0.0.1 00:34:48.165 eflags: none 00:34:48.165 sectype: none 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==: 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==: 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: ]] 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.165 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.165 nvme0n1 00:34:48.166 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.166 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.166 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.166 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.166 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.166 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.166 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTlkYjk0ZjAxNTZkZDNiNDU5MDEwNDU5NjExZTc1ZTeVgrgW: 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTlkYjk0ZjAxNTZkZDNiNDU5MDEwNDU5NjExZTc1ZTeVgrgW: 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: ]] 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.425 nvme0n1 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.425 06:25:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==: 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==: 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: ]] 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.425 
06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.425 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.684 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.684 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.684 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.684 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.684 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.684 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.684 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.684 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.684 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.685 nvme0n1 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp: 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp: 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: ]] 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.685 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:34:48.944 nvme0n1 00:34:48.944 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.944 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.944 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.944 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.944 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.944 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.944 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.944 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.944 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.944 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.944 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.944 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.944 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:48.944 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.944 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:48.944 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:48.944 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:48.944 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MWFkYjQ4M2JkNTYzMGViYjM4M2QyZTVkMDkxYWE5OGI0NjEzMDk3ODFjYTE5MTlhs8kwCw==: 00:34:48.944 06:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: 00:34:48.944 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:48.944 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:48.944 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWFkYjQ4M2JkNTYzMGViYjM4M2QyZTVkMDkxYWE5OGI0NjEzMDk3ODFjYTE5MTlhs8kwCw==: 00:34:48.944 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: ]] 00:34:48.944 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: 00:34:48.944 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:48.944 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.944 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:48.944 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:48.944 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:48.944 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.944 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:48.944 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.944 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.944 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.944 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.944 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.944 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.944 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.944 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.944 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.944 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.944 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.944 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.944 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.944 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.944 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:48.944 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.944 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.203 nvme0n1 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YThkYzc3YzhkNDNiMThjMmQyMzFhNGJhMTgyZWI0N2E4MjNlYjdhOTU5ZGZkNmU2ZGJmZWFmNmIyNGZkMjg3MgjPWIE=: 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:49.203 06:25:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YThkYzc3YzhkNDNiMThjMmQyMzFhNGJhMTgyZWI0N2E4MjNlYjdhOTU5ZGZkNmU2ZGJmZWFmNmIyNGZkMjg3MgjPWIE=: 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.203 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.462 nvme0n1 00:34:49.462 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.462 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.463 
06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTlkYjk0ZjAxNTZkZDNiNDU5MDEwNDU5NjExZTc1ZTeVgrgW: 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTlkYjk0ZjAxNTZkZDNiNDU5MDEwNDU5NjExZTc1ZTeVgrgW: 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: ]] 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: 00:34:49.463 
06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.463 06:25:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.463 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.722 nvme0n1 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.722 06:25:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==: 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==: 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: ]] 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:49.722 06:25:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.722 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.981 nvme0n1 00:34:49.981 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.981 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.981 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.981 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.981 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.981 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.981 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.981 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.981 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.981 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.981 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.981 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.981 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:49.981 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.981 06:25:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.981 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:49.981 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:49.981 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp: 00:34:49.981 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: 00:34:49.981 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.981 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:49.981 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp: 00:34:49.981 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: ]] 00:34:49.981 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: 00:34:49.981 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:49.981 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.981 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:49.981 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:49.981 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:49.981 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.982 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:34:49.982 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.982 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.982 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.982 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.982 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.982 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.982 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.982 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.982 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.982 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.982 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.982 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.982 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.982 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.982 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:49.982 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.982 06:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.240 nvme0n1 00:34:50.240 06:25:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.240 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.240 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWFkYjQ4M2JkNTYzMGViYjM4M2QyZTVkMDkxYWE5OGI0NjEzMDk3ODFjYTE5MTlhs8kwCw==: 00:34:50.241 06:25:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWFkYjQ4M2JkNTYzMGViYjM4M2QyZTVkMDkxYWE5OGI0NjEzMDk3ODFjYTE5MTlhs8kwCw==: 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: ]] 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.241 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.499 nvme0n1 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YThkYzc3YzhkNDNiMThjMmQyMzFhNGJhMTgyZWI0N2E4MjNlYjdhOTU5ZGZkNmU2ZGJmZWFmNmIyNGZkMjg3MgjPWIE=: 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YThkYzc3YzhkNDNiMThjMmQyMzFhNGJhMTgyZWI0N2E4MjNlYjdhOTU5ZGZkNmU2ZGJmZWFmNmIyNGZkMjg3MgjPWIE=: 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.500 06:25:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.500 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.759 nvme0n1 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTlkYjk0ZjAxNTZkZDNiNDU5MDEwNDU5NjExZTc1ZTeVgrgW: 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTlkYjk0ZjAxNTZkZDNiNDU5MDEwNDU5NjExZTc1ZTeVgrgW: 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: ]] 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:50.759 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:50.760 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.760 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.018 nvme0n1 00:34:51.018 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.018 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.018 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.018 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.018 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.018 06:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==: 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==: 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: ]] 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:51.019 
06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.019 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.278 nvme0n1 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:51.278 06:25:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp: 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp: 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: ]] 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.278 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.537 nvme0n1 00:34:51.537 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.537 06:25:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.537 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.537 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.537 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.537 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.795 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.795 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.795 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.795 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.795 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.795 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.795 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:51.795 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.795 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:51.795 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:51.795 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:51.795 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWFkYjQ4M2JkNTYzMGViYjM4M2QyZTVkMDkxYWE5OGI0NjEzMDk3ODFjYTE5MTlhs8kwCw==: 00:34:51.795 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: 00:34:51.795 
06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:51.795 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:51.795 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWFkYjQ4M2JkNTYzMGViYjM4M2QyZTVkMDkxYWE5OGI0NjEzMDk3ODFjYTE5MTlhs8kwCw==: 00:34:51.795 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: ]] 00:34:51.795 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: 00:34:51.796 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:51.796 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.796 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:51.796 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:51.796 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:51.796 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.796 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:51.796 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.796 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.796 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.796 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.796 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:51.796 06:25:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:51.796 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:51.796 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.796 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.796 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:51.796 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.796 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:51.796 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:51.796 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:51.796 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:51.796 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.796 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.054 nvme0n1 00:34:52.055 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.055 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.055 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.055 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.055 06:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.055 06:25:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YThkYzc3YzhkNDNiMThjMmQyMzFhNGJhMTgyZWI0N2E4MjNlYjdhOTU5ZGZkNmU2ZGJmZWFmNmIyNGZkMjg3MgjPWIE=: 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YThkYzc3YzhkNDNiMThjMmQyMzFhNGJhMTgyZWI0N2E4MjNlYjdhOTU5ZGZkNmU2ZGJmZWFmNmIyNGZkMjg3MgjPWIE=: 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:52.055 
06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.055 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.314 nvme0n1 00:34:52.314 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.314 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.314 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.314 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.314 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.314 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.314 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.314 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.314 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.314 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.314 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.314 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:52.314 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.314 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:34:52.314 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.314 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:52.314 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:52.314 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:52.314 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTlkYjk0ZjAxNTZkZDNiNDU5MDEwNDU5NjExZTc1ZTeVgrgW: 00:34:52.314 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: 00:34:52.314 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:52.314 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:52.314 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTlkYjk0ZjAxNTZkZDNiNDU5MDEwNDU5NjExZTc1ZTeVgrgW: 00:34:52.314 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: ]] 00:34:52.314 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: 00:34:52.314 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:52.314 06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:52.314 06:25:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:52.314  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:52.314  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:52.314  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:52.314  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:34:52.314  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:52.314  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:52.314  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:52.314  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:52.314  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:52.314  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:52.314  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:52.314  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:52.314  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:52.314  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:52.314  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:52.314  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:52.314  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:52.314  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:52.314  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:52.314  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:52.314  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:52.880  nvme0n1
00:34:52.880  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:52.880  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:52.880  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:52.880  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:52.880  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:52.880  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:52.880  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:52.880  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:52.880  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:52.880  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:52.880  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:52.880  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:52.880  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:34:52.880  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:52.880  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:52.880  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:34:52.880  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:52.880  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==:
00:34:52.880  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==:
00:34:52.881  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:52.881  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:34:52.881  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==:
00:34:52.881  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: ]]
00:34:52.881  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==:
00:34:52.881  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:34:52.881  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:52.881  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:52.881  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:52.881  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:52.881  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:52.881  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:34:52.881  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:52.881  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:52.881  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:52.881  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:52.881  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:52.881  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:52.881  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:52.881  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:52.881  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:52.881  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:52.881  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:52.881  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:52.881  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:52.881  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:52.881  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:52.881  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:52.881  06:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.139  nvme0n1
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp:
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V:
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp:
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: ]]
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V:
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:53.139  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:53.140  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:53.140  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:53.140  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:53.140  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:53.140  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:53.140  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.140  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.707  nvme0n1
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWFkYjQ4M2JkNTYzMGViYjM4M2QyZTVkMDkxYWE5OGI0NjEzMDk3ODFjYTE5MTlhs8kwCw==:
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J:
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWFkYjQ4M2JkNTYzMGViYjM4M2QyZTVkMDkxYWE5OGI0NjEzMDk3ODFjYTE5MTlhs8kwCw==:
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: ]]
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J:
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.707  06:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.966  nvme0n1
00:34:53.966  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.966  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:53.966  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:53.966  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.966  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.966  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YThkYzc3YzhkNDNiMThjMmQyMzFhNGJhMTgyZWI0N2E4MjNlYjdhOTU5ZGZkNmU2ZGJmZWFmNmIyNGZkMjg3MgjPWIE=:
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YThkYzc3YzhkNDNiMThjMmQyMzFhNGJhMTgyZWI0N2E4MjNlYjdhOTU5ZGZkNmU2ZGJmZWFmNmIyNGZkMjg3MgjPWIE=:
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:54.225  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:54.484  nvme0n1
00:34:54.484  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:54.484  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:54.484  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:54.484  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:54.484  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:54.484  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:54.484  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:54.484  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:54.484  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:54.484  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:54.484  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:54.484  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:54.484  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:54.484  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:34:54.484  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:54.484  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:54.484  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:54.484  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:54.484  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTlkYjk0ZjAxNTZkZDNiNDU5MDEwNDU5NjExZTc1ZTeVgrgW:
00:34:54.485  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=:
00:34:54.485  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:54.485  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:54.485  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTlkYjk0ZjAxNTZkZDNiNDU5MDEwNDU5NjExZTc1ZTeVgrgW:
00:34:54.485  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: ]]
00:34:54.485  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=:
00:34:54.485  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:34:54.485  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:54.485  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:54.485  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:54.485  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:54.485  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:54.485  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:34:54.485  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:54.485  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:54.485  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:54.485  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:54.485  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:54.485  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:54.485  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:54.485  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:54.485  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:54.485  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:54.485  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:54.485  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:54.485  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:54.485  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:54.485  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:54.485  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:54.485  06:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:55.053  nvme0n1
00:34:55.053  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.053  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:55.053  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:55.053  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.053  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:55.053  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==:
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==:
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==:
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: ]]
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==:
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.311  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:55.880  nvme0n1
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp:
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V:
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp:
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: ]]
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V:
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:55.880  06:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:56.447  nvme0n1
00:34:56.447  06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:56.447  06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:56.447  06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:56.447  06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host --
common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.447 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.447 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.447 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.447 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.447 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.447 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.447 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.447 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:56.447 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:56.447 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.447 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:56.447 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:56.447 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:56.447 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWFkYjQ4M2JkNTYzMGViYjM4M2QyZTVkMDkxYWE5OGI0NjEzMDk3ODFjYTE5MTlhs8kwCw==: 00:34:56.448 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: 00:34:56.448 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:56.448 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:56.448 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:MWFkYjQ4M2JkNTYzMGViYjM4M2QyZTVkMDkxYWE5OGI0NjEzMDk3ODFjYTE5MTlhs8kwCw==: 00:34:56.448 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: ]] 00:34:56.448 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: 00:34:56.448 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:56.448 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.448 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:56.448 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:56.448 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:56.448 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.448 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:56.448 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.448 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.448 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.448 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:56.448 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:56.448 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:56.448 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:56.448 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.448 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.448 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:56.448 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:56.448 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:56.448 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:56.448 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:56.448 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:56.448 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.448 06:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.016 nvme0n1 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YThkYzc3YzhkNDNiMThjMmQyMzFhNGJhMTgyZWI0N2E4MjNlYjdhOTU5ZGZkNmU2ZGJmZWFmNmIyNGZkMjg3MgjPWIE=: 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YThkYzc3YzhkNDNiMThjMmQyMzFhNGJhMTgyZWI0N2E4MjNlYjdhOTU5ZGZkNmU2ZGJmZWFmNmIyNGZkMjg3MgjPWIE=: 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.016 
06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.016 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.584 nvme0n1 00:34:57.584 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.584 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.584 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.584 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.584 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.584 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.843 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.843 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.843 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.843 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.843 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.843 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:57.843 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:57.843 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:34:57.843 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:34:57.843 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.843 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:57.843 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:57.843 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:57.843 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTlkYjk0ZjAxNTZkZDNiNDU5MDEwNDU5NjExZTc1ZTeVgrgW: 00:34:57.843 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: 00:34:57.843 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:57.843 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:57.843 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTlkYjk0ZjAxNTZkZDNiNDU5MDEwNDU5NjExZTc1ZTeVgrgW: 00:34:57.843 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: ]] 00:34:57.843 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: 00:34:57.843 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:57.843 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.843 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:57.843 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:34:57.843 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:57.843 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.843 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:57.843 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.844 nvme0n1 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:57.844 
06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==: 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==: 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: ]] 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.844 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.103 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.103 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.103 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.103 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.103 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.103 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.103 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.103 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.103 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.103 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.103 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.103 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.103 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:58.103 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.103 06:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.103 nvme0n1 
00:34:58.103 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.103 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.103 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.103 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.103 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.103 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.103 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.103 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.103 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.103 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.103 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.103 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.103 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:58.103 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.103 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:58.103 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:58.103 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:58.103 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp: 00:34:58.103 06:25:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: 00:34:58.103 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:58.103 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:58.103 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp: 00:34:58.103 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: ]] 00:34:58.103 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: 00:34:58.103 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:58.103 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.103 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:58.103 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:58.103 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:58.103 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.104 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:58.104 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.104 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.104 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.104 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.104 
06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.104 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.104 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.104 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.104 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.104 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.104 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.104 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.104 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.104 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.104 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:58.104 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.104 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.363 nvme0n1 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.363 06:25:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWFkYjQ4M2JkNTYzMGViYjM4M2QyZTVkMDkxYWE5OGI0NjEzMDk3ODFjYTE5MTlhs8kwCw==: 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MWFkYjQ4M2JkNTYzMGViYjM4M2QyZTVkMDkxYWE5OGI0NjEzMDk3ODFjYTE5MTlhs8kwCw==: 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: ]] 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.363 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.622 nvme0n1 00:34:58.622 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.622 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.622 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.622 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.622 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.622 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.622 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.622 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:58.622 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.622 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.622 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.622 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.622 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:58.622 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.622 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:58.622 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:58.622 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:58.622 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YThkYzc3YzhkNDNiMThjMmQyMzFhNGJhMTgyZWI0N2E4MjNlYjdhOTU5ZGZkNmU2ZGJmZWFmNmIyNGZkMjg3MgjPWIE=: 00:34:58.622 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:58.622 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:58.622 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:58.622 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YThkYzc3YzhkNDNiMThjMmQyMzFhNGJhMTgyZWI0N2E4MjNlYjdhOTU5ZGZkNmU2ZGJmZWFmNmIyNGZkMjg3MgjPWIE=: 00:34:58.622 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:58.622 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:58.622 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.622 06:25:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:58.622 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:58.622 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:58.623 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.623 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:58.623 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.623 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.623 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.623 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.623 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.623 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.623 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.623 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.623 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.623 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.623 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.623 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.623 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.623 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.623 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:58.623 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.623 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.881 nvme0n1 00:34:58.881 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTlkYjk0ZjAxNTZkZDNiNDU5MDEwNDU5NjExZTc1ZTeVgrgW: 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTlkYjk0ZjAxNTZkZDNiNDU5MDEwNDU5NjExZTc1ZTeVgrgW: 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: ]] 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.882 06:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.141 nvme0n1 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:59.141 
06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==: 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==: 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: ]] 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:59.141 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.142 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:59.142 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.142 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.401 nvme0n1 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp: 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: 
00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp: 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: ]] 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.401 06:25:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.401 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.660 nvme0n1 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.660 06:25:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWFkYjQ4M2JkNTYzMGViYjM4M2QyZTVkMDkxYWE5OGI0NjEzMDk3ODFjYTE5MTlhs8kwCw==: 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWFkYjQ4M2JkNTYzMGViYjM4M2QyZTVkMDkxYWE5OGI0NjEzMDk3ODFjYTE5MTlhs8kwCw==: 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: ]] 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.660 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.661 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.661 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.661 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.661 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:59.661 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.661 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:59.661 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.661 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.920 nvme0n1 00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YThkYzc3YzhkNDNiMThjMmQyMzFhNGJhMTgyZWI0N2E4MjNlYjdhOTU5ZGZkNmU2ZGJmZWFmNmIyNGZkMjg3MgjPWIE=:
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YThkYzc3YzhkNDNiMThjMmQyMzFhNGJhMTgyZWI0N2E4MjNlYjdhOTU5ZGZkNmU2ZGJmZWFmNmIyNGZkMjg3MgjPWIE=:
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:59.920 06:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.179 nvme0n1
00:35:00.179 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:00.179 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:00.179 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:00.179 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:00.179 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.179 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:00.179 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:00.179 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:00.179 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:00.179 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.179 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:00.179 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:00.179 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:00.179 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:35:00.179 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:00.179 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:00.179 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:00.179 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:00.179 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTlkYjk0ZjAxNTZkZDNiNDU5MDEwNDU5NjExZTc1ZTeVgrgW:
00:35:00.179 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=:
00:35:00.179 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:00.179 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:00.180 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTlkYjk0ZjAxNTZkZDNiNDU5MDEwNDU5NjExZTc1ZTeVgrgW:
00:35:00.180 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: ]]
00:35:00.180 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=:
00:35:00.180 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:35:00.180 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:00.180 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:00.180 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:00.180 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:00.180 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:00.180 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:35:00.180 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:00.180 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.180 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:00.180 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:00.180 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:00.180 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:00.180 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:00.180 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:00.180 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:00.180 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:00.180 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:00.180 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:00.180 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:00.180 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:00.180 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:00.180 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:00.180 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.439 nvme0n1
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==:
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==:
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==:
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: ]]
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==:
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:00.439 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.698 nvme0n1
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp:
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V:
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp:
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: ]]
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V:
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:00.698 06:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.957 nvme0n1
00:35:00.957 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:00.957 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:00.957 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:00.957 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:00.957 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:00.957 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:00.957 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:00.957 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:00.957 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:00.957 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWFkYjQ4M2JkNTYzMGViYjM4M2QyZTVkMDkxYWE5OGI0NjEzMDk3ODFjYTE5MTlhs8kwCw==:
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J:
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWFkYjQ4M2JkNTYzMGViYjM4M2QyZTVkMDkxYWE5OGI0NjEzMDk3ODFjYTE5MTlhs8kwCw==:
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: ]]
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J:
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:01.216 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:01.475 nvme0n1
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YThkYzc3YzhkNDNiMThjMmQyMzFhNGJhMTgyZWI0N2E4MjNlYjdhOTU5ZGZkNmU2ZGJmZWFmNmIyNGZkMjg3MgjPWIE=:
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YThkYzc3YzhkNDNiMThjMmQyMzFhNGJhMTgyZWI0N2E4MjNlYjdhOTU5ZGZkNmU2ZGJmZWFmNmIyNGZkMjg3MgjPWIE=:
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:01.475 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:01.734 nvme0n1
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTlkYjk0ZjAxNTZkZDNiNDU5MDEwNDU5NjExZTc1ZTeVgrgW:
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=:
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTlkYjk0ZjAxNTZkZDNiNDU5MDEwNDU5NjExZTc1ZTeVgrgW:
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: ]]
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=:
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:01.734 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:01.735 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:01.735 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:01.735 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:01.735 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:01.735 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:01.735 06:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:01.993 nvme0n1
00:35:01.993 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1
00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==:
00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: 00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==: 00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: ]] 00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: 00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.252 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.511 nvme0n1 00:35:02.511 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.511 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:02.511 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:02.511 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:02.511 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.511 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.511 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:02.511 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:02.511 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.511 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp: 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp: 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: ]] 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.770 06:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.029 nvme0n1 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWFkYjQ4M2JkNTYzMGViYjM4M2QyZTVkMDkxYWE5OGI0NjEzMDk3ODFjYTE5MTlhs8kwCw==: 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWFkYjQ4M2JkNTYzMGViYjM4M2QyZTVkMDkxYWE5OGI0NjEzMDk3ODFjYTE5MTlhs8kwCw==: 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: ]] 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.029 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.597 nvme0n1 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YThkYzc3YzhkNDNiMThjMmQyMzFhNGJhMTgyZWI0N2E4MjNlYjdhOTU5ZGZkNmU2ZGJmZWFmNmIyNGZkMjg3MgjPWIE=: 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YThkYzc3YzhkNDNiMThjMmQyMzFhNGJhMTgyZWI0N2E4MjNlYjdhOTU5ZGZkNmU2ZGJmZWFmNmIyNGZkMjg3MgjPWIE=: 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.597 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:03.857 nvme0n1 00:35:03.857 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.857 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.857 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:03.857 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.857 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.857 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.857 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.857 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:03.857 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.857 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.115 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.115 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:04.115 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:04.115 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:35:04.115 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:04.115 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:04.115 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:04.115 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:04.115 06:25:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTlkYjk0ZjAxNTZkZDNiNDU5MDEwNDU5NjExZTc1ZTeVgrgW: 00:35:04.115 06:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: 00:35:04.115 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:04.115 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:04.115 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTlkYjk0ZjAxNTZkZDNiNDU5MDEwNDU5NjExZTc1ZTeVgrgW: 00:35:04.115 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: ]] 00:35:04.115 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: 00:35:04.115 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:35:04.115 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:04.115 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:04.115 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:04.115 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:04.115 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:04.115 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:04.116 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.116 06:25:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.116 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.116 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:04.116 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:04.116 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:04.116 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:04.116 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:04.116 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:04.116 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:04.116 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:04.116 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:04.116 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:04.116 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:04.116 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:04.116 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.116 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.684 nvme0n1 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==: 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: 00:35:04.684 06:25:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==: 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: ]] 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.684 06:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.250 nvme0n1 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.250 
06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp: 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp: 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: ]] 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:05.250 06:25:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.250 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.816 nvme0n1 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.816 06:25:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWFkYjQ4M2JkNTYzMGViYjM4M2QyZTVkMDkxYWE5OGI0NjEzMDk3ODFjYTE5MTlhs8kwCw==: 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWFkYjQ4M2JkNTYzMGViYjM4M2QyZTVkMDkxYWE5OGI0NjEzMDk3ODFjYTE5MTlhs8kwCw==: 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: ]] 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:05.816 06:25:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.816 06:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.383 nvme0n1 00:35:06.383 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.383 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.383 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.383 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.383 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:35:06.642 06:25:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YThkYzc3YzhkNDNiMThjMmQyMzFhNGJhMTgyZWI0N2E4MjNlYjdhOTU5ZGZkNmU2ZGJmZWFmNmIyNGZkMjg3MgjPWIE=: 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YThkYzc3YzhkNDNiMThjMmQyMzFhNGJhMTgyZWI0N2E4MjNlYjdhOTU5ZGZkNmU2ZGJmZWFmNmIyNGZkMjg3MgjPWIE=: 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.642 06:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.297 nvme0n1 00:35:07.297 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.297 
06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.297 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.297 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.297 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.297 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTlkYjk0ZjAxNTZkZDNiNDU5MDEwNDU5NjExZTc1ZTeVgrgW: 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTlkYjk0ZjAxNTZkZDNiNDU5MDEwNDU5NjExZTc1ZTeVgrgW: 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: ]] 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.298 nvme0n1 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.298 06:25:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==: 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:07.298 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==: 00:35:07.587 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: ]] 00:35:07.587 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: 00:35:07.587 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:07.587 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.587 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:07.587 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:07.587 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:07.587 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.587 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:07.587 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.587 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.587 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.587 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.587 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:07.587 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # ip_candidates=() 00:35:07.587 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.588 nvme0n1 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp: 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp: 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: ]] 00:35:07.588 06:25:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:07.588 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:07.847 nvme0n1
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWFkYjQ4M2JkNTYzMGViYjM4M2QyZTVkMDkxYWE5OGI0NjEzMDk3ODFjYTE5MTlhs8kwCw==:
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J:
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWFkYjQ4M2JkNTYzMGViYjM4M2QyZTVkMDkxYWE5OGI0NjEzMDk3ODFjYTE5MTlhs8kwCw==:
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: ]]
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J:
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- #
digest=sha512
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:07.847 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:07.848 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:07.848 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:07.848 06:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:08.107 nvme0n1
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YThkYzc3YzhkNDNiMThjMmQyMzFhNGJhMTgyZWI0N2E4MjNlYjdhOTU5ZGZkNmU2ZGJmZWFmNmIyNGZkMjg3MgjPWIE=:
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YThkYzc3YzhkNDNiMThjMmQyMzFhNGJhMTgyZWI0N2E4MjNlYjdhOTU5ZGZkNmU2ZGJmZWFmNmIyNGZkMjg3MgjPWIE=:
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:08.107 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:08.366 nvme0n1
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd
bdev_nvme_get_controllers
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTlkYjk0ZjAxNTZkZDNiNDU5MDEwNDU5NjExZTc1ZTeVgrgW:
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=:
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTlkYjk0ZjAxNTZkZDNiNDU5MDEwNDU5NjExZTc1ZTeVgrgW:
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: ]]
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=:
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:08.366 06:25:28
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:08.366 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:08.625 nvme0n1
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==:
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==:
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:35:08.625 06:25:28
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==:
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: ]]
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==:
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:08.625 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:08.884 nvme0n1
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0
]]
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp:
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V:
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp:
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: ]]
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V:
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:08.884 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:08.885 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:08.885 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:08.885 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:08.885 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:08.885 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:08.885 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:08.885 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:08.885 06:25:28
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:08.885 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:08.885 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:08.885 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:08.885 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:08.885 06:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:09.143 nvme0n1
00:35:09.143 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:09.143 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:09.143 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:09.143 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:09.143 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:09.143 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:09.143 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:09.143 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:09.143 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:09.143 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:09.143 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:09.143 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:09.143 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:35:09.143 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:09.143 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:09.143 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:35:09.143 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:09.143 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWFkYjQ4M2JkNTYzMGViYjM4M2QyZTVkMDkxYWE5OGI0NjEzMDk3ODFjYTE5MTlhs8kwCw==:
00:35:09.143 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J:
00:35:09.143 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:09.143 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:35:09.143 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWFkYjQ4M2JkNTYzMGViYjM4M2QyZTVkMDkxYWE5OGI0NjEzMDk3ODFjYTE5MTlhs8kwCw==:
00:35:09.143 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: ]]
00:35:09.143 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J:
00:35:09.143 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:35:09.143 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:09.143 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:09.144 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:35:09.144 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:09.144 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:09.144 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:35:09.144 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:09.144 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:09.144 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:09.144 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:09.144 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:09.144 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:09.144 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:09.144 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:09.144 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:09.144 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:09.144 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:09.144 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:09.144 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:09.144 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:09.144 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:09.144 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:09.144 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:09.402 nvme0n1
00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:09.402 06:25:29
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YThkYzc3YzhkNDNiMThjMmQyMzFhNGJhMTgyZWI0N2E4MjNlYjdhOTU5ZGZkNmU2ZGJmZWFmNmIyNGZkMjg3MgjPWIE=: 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YThkYzc3YzhkNDNiMThjMmQyMzFhNGJhMTgyZWI0N2E4MjNlYjdhOTU5ZGZkNmU2ZGJmZWFmNmIyNGZkMjg3MgjPWIE=: 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.402 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.660 nvme0n1 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.660 
06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTlkYjk0ZjAxNTZkZDNiNDU5MDEwNDU5NjExZTc1ZTeVgrgW: 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTlkYjk0ZjAxNTZkZDNiNDU5MDEwNDU5NjExZTc1ZTeVgrgW: 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: ]] 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.660 06:25:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.660 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.918 nvme0n1 00:35:09.918 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.918 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.918 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.918 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.918 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.918 06:25:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.918 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.918 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.918 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.918 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.918 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.918 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.918 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:35:09.918 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.918 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:09.918 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:09.918 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:09.918 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==: 00:35:09.918 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: 00:35:09.919 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:09.919 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:09.919 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==: 00:35:09.919 06:25:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: ]] 00:35:09.919 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: 00:35:09.919 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:35:09.919 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.919 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:09.919 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:09.919 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:09.919 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.919 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:09.919 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.919 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.919 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.919 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.919 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.919 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.919 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.919 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.919 06:25:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.919 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:09.919 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.919 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:09.919 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:09.919 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:09.919 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:09.919 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.919 06:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.177 nvme0n1 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.177 06:25:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp: 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp: 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: ]] 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:35:10.177 06:25:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.177 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.436 nvme0n1 00:35:10.436 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.436 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.436 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.436 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.436 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.436 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.694 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.694 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.694 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.694 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.694 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.694 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.694 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:35:10.694 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.694 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:10.694 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:10.694 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:10.694 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWFkYjQ4M2JkNTYzMGViYjM4M2QyZTVkMDkxYWE5OGI0NjEzMDk3ODFjYTE5MTlhs8kwCw==: 00:35:10.694 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: 00:35:10.694 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:10.694 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:10.694 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWFkYjQ4M2JkNTYzMGViYjM4M2QyZTVkMDkxYWE5OGI0NjEzMDk3ODFjYTE5MTlhs8kwCw==: 00:35:10.694 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: ]] 00:35:10.694 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: 00:35:10.694 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:35:10.694 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.695 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:10.695 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:10.695 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:10.695 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.695 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:10.695 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.695 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.695 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.695 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.695 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:10.695 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:10.695 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:10.695 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.695 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.695 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:10.695 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.695 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:10.695 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:10.695 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:10.695 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:10.695 06:25:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.695 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.953 nvme0n1 00:35:10.953 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.953 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.953 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.953 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.953 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.953 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.953 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.953 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.953 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.953 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.953 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.953 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.953 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:35:10.953 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YThkYzc3YzhkNDNiMThjMmQyMzFhNGJhMTgyZWI0N2E4MjNlYjdhOTU5ZGZkNmU2ZGJmZWFmNmIyNGZkMjg3MgjPWIE=: 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YThkYzc3YzhkNDNiMThjMmQyMzFhNGJhMTgyZWI0N2E4MjNlYjdhOTU5ZGZkNmU2ZGJmZWFmNmIyNGZkMjg3MgjPWIE=: 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.954 
06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.954 06:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.212 nvme0n1 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTlkYjk0ZjAxNTZkZDNiNDU5MDEwNDU5NjExZTc1ZTeVgrgW: 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:11.212 06:25:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTlkYjk0ZjAxNTZkZDNiNDU5MDEwNDU5NjExZTc1ZTeVgrgW: 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: ]] 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:11.212 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:11.213 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:11.213 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:35:11.213 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.213 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.213 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:11.213 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:11.213 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:11.213 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:11.213 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:11.213 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:11.213 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.213 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.778 nvme0n1 00:35:11.778 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.778 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==: 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==: 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: ]] 00:35:11.779 06:25:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.779 06:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.037 nvme0n1 00:35:12.037 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.037 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.037 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.037 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.037 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.037 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.037 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.037 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.037 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.037 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
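The attach calls in this trace pass `--dhchap-ctrlr-key ckey<N>` only when a controller key exists for the key ID; the `host/auth.sh@58` lines show the bash array idiom used (`ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})`). A minimal sketch of that conditional argument building — the function name is hypothetical, only the `:+` expansion mirrors the script:

```shell
# Hypothetical helper illustrating the host/auth.sh@58 idiom: the ckey array
# expands to the extra two flags only when ckeys[keyid] is non-empty.
build_attach_args() {
    local keyid=$1; shift
    local -a ckeys=("$@")                 # controller keys; entries may be empty
    local -a ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "--dhchap-key key${keyid}" "${ckey[@]}"
}

build_attach_args 1 "" "some-ctrlr-key"   # prints: --dhchap-key key1 --dhchap-ctrlr-key ckey1
build_attach_args 0 ""                    # prints: --dhchap-key key0
```

When the array is empty, `"${ckey[@]}"` expands to zero words, so no stray flag reaches the RPC — which is why keyid 4 in this log (whose `ckey` is empty) attaches with `--dhchap-key key4` alone.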
00:35:12.037 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.037 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.037 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:35:12.037 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.037 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:12.037 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:12.037 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:12.037 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp: 00:35:12.037 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: 00:35:12.037 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:12.037 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:12.037 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp: 00:35:12.037 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: ]] 00:35:12.037 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: 00:35:12.037 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:35:12.037 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.037 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:12.038 
06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:12.038 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:12.038 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.038 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:12.038 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.038 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.038 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.038 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.038 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:12.038 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:12.038 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:12.038 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.038 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.038 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:12.038 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.038 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:12.038 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:12.038 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:12.038 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:12.038 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.038 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.605 nvme0n1 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.605 06:25:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWFkYjQ4M2JkNTYzMGViYjM4M2QyZTVkMDkxYWE5OGI0NjEzMDk3ODFjYTE5MTlhs8kwCw==: 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWFkYjQ4M2JkNTYzMGViYjM4M2QyZTVkMDkxYWE5OGI0NjEzMDk3ODFjYTE5MTlhs8kwCw==: 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: ]] 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.605 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
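The repeated `get_main_ns_ip` trace lines (`nvmf/common.sh@769`–`@783`) resolve the target address by mapping the transport to an environment-variable name and then dereferencing it. A hedged sketch of that selection logic — the function name is hypothetical and the IP value is taken from the log:

```shell
# Sketch of the transport -> env-var -> address resolution the log repeats.
# For tcp the candidate variable is NVMF_INITIATOR_IP, for rdma it is
# NVMF_FIRST_TARGET_IP; ${!var} dereferences the chosen name.
get_main_ns_ip_sketch() {
    local transport=$1
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    [ -n "$transport" ] || return 1
    local var=${ip_candidates[$transport]}
    [ -n "$var" ] || return 1             # unknown transport: no candidate
    echo "${!var}"
}

NVMF_INITIATOR_IP=10.0.0.1               # value seen throughout this log
get_main_ns_ip_sketch tcp                # prints 10.0.0.1
```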
00:35:12.864 nvme0n1 00:35:12.864 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.864 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.864 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.864 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.864 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.864 06:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YThkYzc3YzhkNDNiMThjMmQyMzFhNGJhMTgyZWI0N2E4MjNlYjdhOTU5ZGZkNmU2ZGJmZWFmNmIyNGZkMjg3MgjPWIE=: 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YThkYzc3YzhkNDNiMThjMmQyMzFhNGJhMTgyZWI0N2E4MjNlYjdhOTU5ZGZkNmU2ZGJmZWFmNmIyNGZkMjg3MgjPWIE=: 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:13.122 
06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.122 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.381 nvme0n1 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTlkYjk0ZjAxNTZkZDNiNDU5MDEwNDU5NjExZTc1ZTeVgrgW: 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTlkYjk0ZjAxNTZkZDNiNDU5MDEwNDU5NjExZTc1ZTeVgrgW: 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: ]] 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDJlYTRhYWJjZjExYWExZTYzZTA2NjFmN2QyZTE5ZWU2ODRkN2QwNjI4ZTZhNTQwNTQ2OWM5MGJmMDRmNDMxY/Rd6Ak=: 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:13.381 06:25:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.381 06:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.947 nvme0n1 00:35:13.947 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.947 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.947 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.948 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.948 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:14.206 06:25:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==: 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==: 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: ]] 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:14.206 06:25:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.206 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.772 nvme0n1 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.772 06:25:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp: 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp: 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: ]] 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:14.772 06:25:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.772 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.773 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.773 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:14.773 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:14.773 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:14.773 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:14.773 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:14.773 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:14.773 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:14.773 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:14.773 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:14.773 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:14.773 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:14.773 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:14.773 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.773 06:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.338 nvme0n1 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:15.338 06:25:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWFkYjQ4M2JkNTYzMGViYjM4M2QyZTVkMDkxYWE5OGI0NjEzMDk3ODFjYTE5MTlhs8kwCw==: 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWFkYjQ4M2JkNTYzMGViYjM4M2QyZTVkMDkxYWE5OGI0NjEzMDk3ODFjYTE5MTlhs8kwCw==: 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: ]] 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmU4YjkzNzIzYmY5NTUwNDhiMjViZTA3NjBlOWVhYmNz+53J: 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.338 06:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:15.904 nvme0n1 00:35:15.904 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.904 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:15.904 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:15.904 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.904 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.904 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.904 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:15.904 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:15.904 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.904 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YThkYzc3YzhkNDNiMThjMmQyMzFhNGJhMTgyZWI0N2E4MjNlYjdhOTU5ZGZkNmU2ZGJmZWFmNmIyNGZkMjg3MgjPWIE=: 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YThkYzc3YzhkNDNiMThjMmQyMzFhNGJhMTgyZWI0N2E4MjNlYjdhOTU5ZGZkNmU2ZGJmZWFmNmIyNGZkMjg3MgjPWIE=: 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:16.163 
06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.163 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.729 nvme0n1 00:35:16.729 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.729 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:16.729 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:16.729 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.729 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.729 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.729 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:16.729 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==: 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==: 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: ]] 00:35:16.730 
06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.730 request: 00:35:16.730 { 00:35:16.730 "name": "nvme0", 00:35:16.730 "trtype": "tcp", 00:35:16.730 "traddr": "10.0.0.1", 00:35:16.730 "adrfam": "ipv4", 00:35:16.730 "trsvcid": "4420", 00:35:16.730 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:16.730 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:16.730 "prchk_reftag": false, 00:35:16.730 "prchk_guard": false, 00:35:16.730 "hdgst": false, 00:35:16.730 "ddgst": false, 00:35:16.730 "allow_unrecognized_csi": false, 00:35:16.730 "method": "bdev_nvme_attach_controller", 00:35:16.730 "req_id": 1 00:35:16.730 } 00:35:16.730 Got JSON-RPC error response 00:35:16.730 response: 00:35:16.730 { 00:35:16.730 "code": -5, 00:35:16.730 "message": "Input/output 
error" 00:35:16.730 } 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.730 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.988 request: 00:35:16.988 { 00:35:16.989 "name": "nvme0", 00:35:16.989 "trtype": "tcp", 00:35:16.989 "traddr": "10.0.0.1", 
00:35:16.989 "adrfam": "ipv4", 00:35:16.989 "trsvcid": "4420", 00:35:16.989 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:16.989 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:16.989 "prchk_reftag": false, 00:35:16.989 "prchk_guard": false, 00:35:16.989 "hdgst": false, 00:35:16.989 "ddgst": false, 00:35:16.989 "dhchap_key": "key2", 00:35:16.989 "allow_unrecognized_csi": false, 00:35:16.989 "method": "bdev_nvme_attach_controller", 00:35:16.989 "req_id": 1 00:35:16.989 } 00:35:16.989 Got JSON-RPC error response 00:35:16.989 response: 00:35:16.989 { 00:35:16.989 "code": -5, 00:35:16.989 "message": "Input/output error" 00:35:16.989 } 00:35:16.989 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:16.989 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:16.989 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:16.989 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:16.989 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:16.989 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:35:16.989 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:35:16.989 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.989 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.989 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.989 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:35:16.989 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:35:16.989 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:16.989 06:25:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:16.989 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:16.989 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:16.989 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:16.989 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:16.989 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:16.989 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:16.989 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:16.989 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:16.989 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:16.989 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:16.989 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:16.989 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:16.989 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:16.989 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:16.989 06:25:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:16.989 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:16.989 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.989 06:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.989 request: 00:35:16.989 { 00:35:16.989 "name": "nvme0", 00:35:16.989 "trtype": "tcp", 00:35:16.989 "traddr": "10.0.0.1", 00:35:16.989 "adrfam": "ipv4", 00:35:16.989 "trsvcid": "4420", 00:35:16.989 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:16.989 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:16.989 "prchk_reftag": false, 00:35:16.989 "prchk_guard": false, 00:35:16.989 "hdgst": false, 00:35:16.989 "ddgst": false, 00:35:16.989 "dhchap_key": "key1", 00:35:16.989 "dhchap_ctrlr_key": "ckey2", 00:35:16.989 "allow_unrecognized_csi": false, 00:35:16.989 "method": "bdev_nvme_attach_controller", 00:35:16.989 "req_id": 1 00:35:16.989 } 00:35:16.989 Got JSON-RPC error response 00:35:16.989 response: 00:35:16.989 { 00:35:16.989 "code": -5, 00:35:16.989 "message": "Input/output error" 00:35:16.989 } 00:35:16.989 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:16.989 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:16.989 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:16.989 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:16.989 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:16.989 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:35:16.989 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:16.989 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:16.989 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:16.989 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:16.989 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:16.989 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:16.989 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:16.989 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:16.989 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:16.989 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:16.989 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:16.989 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.989 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.247 nvme0n1 00:35:17.247 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.247 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:17.247 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:17.247 06:25:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:17.247 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:17.247 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:17.247 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp: 00:35:17.247 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: 00:35:17.247 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:17.247 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:17.247 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp: 00:35:17.247 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: ]] 00:35:17.247 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: 00:35:17.247 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:17.247 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.247 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.247 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.247 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.248 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.248 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.248 06:25:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:35:17.248 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.248 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:17.248 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:17.248 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:17.248 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:17.248 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:17.248 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:17.248 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:17.248 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:17.248 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:17.248 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.248 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.248 request: 00:35:17.248 { 00:35:17.248 "name": "nvme0", 00:35:17.248 "dhchap_key": "key1", 00:35:17.248 "dhchap_ctrlr_key": "ckey2", 00:35:17.248 "method": "bdev_nvme_set_keys", 00:35:17.248 "req_id": 1 00:35:17.248 } 00:35:17.248 Got JSON-RPC error response 00:35:17.248 response: 00:35:17.248 { 00:35:17.248 "code": -13, 00:35:17.248 "message": "Permission denied" 00:35:17.248 } 00:35:17.248 
06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:17.248 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:17.248 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:17.248 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:17.248 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:17.248 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:17.248 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.248 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.248 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.248 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.506 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:35:17.506 06:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:35:18.440 06:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.440 06:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:18.440 06:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.440 06:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.440 06:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.440 06:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:35:18.440 06:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:35:19.374 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.374 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:19.374 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.374 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.374 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.374 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:35:19.374 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:19.374 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.374 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:19.374 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:19.375 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:19.375 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==: 00:35:19.375 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: 00:35:19.375 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:19.375 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:19.375 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJlZTQwNDM3ZGZmMTdkMjQ1NTVkNDg1ZGFmZjRkNmE3OWNlODg0MjVkYzhlN2IyGW4j5g==: 00:35:19.375 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: ]] 00:35:19.375 06:25:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDlmNTZiODBjZTNhNWEzOGYzNjgzYjJmZTg2ODg2MTE0MzIyMGZiMWVjZTVkZTk0T2nlvw==: 00:35:19.375 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:35:19.375 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:19.375 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:19.375 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:19.375 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.375 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.375 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:19.375 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.375 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:19.375 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:19.375 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:19.375 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:19.375 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.375 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.633 nvme0n1 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.633 06:25:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp: 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWRmMGFiMGE1NmFjODJkMTdhYjMyMWM3YTA1ZWM3MTV0NwRp: 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: ]] 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODFhYTc0ZGZhZDcyYjAyNWNmNTE2NTJlNTVkNzgyOWUWh+8V: 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:19.633 
06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.633 request: 00:35:19.633 { 00:35:19.633 "name": "nvme0", 00:35:19.633 "dhchap_key": "key2", 00:35:19.633 "dhchap_ctrlr_key": "ckey1", 00:35:19.633 "method": "bdev_nvme_set_keys", 00:35:19.633 "req_id": 1 00:35:19.633 } 00:35:19.633 Got JSON-RPC error response 00:35:19.633 response: 00:35:19.633 { 00:35:19.633 "code": -13, 00:35:19.633 "message": "Permission denied" 00:35:19.633 } 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.633 06:25:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:35:19.633 06:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:35:21.009 06:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:21.009 06:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:21.009 06:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.009 06:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.009 06:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.009 06:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:35:21.009 06:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:35:21.009 06:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:35:21.009 06:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:35:21.009 06:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:21.009 06:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:35:21.009 06:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:21.009 06:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:35:21.009 06:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:21.009 06:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:21.009 rmmod nvme_tcp 00:35:21.009 rmmod nvme_fabrics 00:35:21.009 06:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:35:21.009 06:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:35:21.009 06:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:35:21.009 06:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1180283 ']' 00:35:21.009 06:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1180283 00:35:21.009 06:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1180283 ']' 00:35:21.009 06:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1180283 00:35:21.009 06:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:35:21.009 06:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:21.009 06:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1180283 00:35:21.009 06:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:21.009 06:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:21.009 06:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1180283' 00:35:21.009 killing process with pid 1180283 00:35:21.009 06:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1180283 00:35:21.009 06:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1180283 00:35:21.009 06:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:21.009 06:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:21.009 06:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:21.009 06:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:35:21.009 06:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:21.009 06:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:35:21.009 06:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:35:21.009 06:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:21.009 06:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:21.009 06:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:21.009 06:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:21.009 06:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:23.574 06:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:23.574 06:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:23.574 06:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:23.574 06:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:35:23.574 06:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:35:23.574 06:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:35:23.574 06:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:23.574 06:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:23.574 06:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:23.574 06:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:23.574 06:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:23.574 06:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:23.574 06:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:26.109 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:26.109 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:26.109 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:26.109 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:26.109 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:26.109 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:26.109 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:26.109 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:26.109 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:26.109 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:26.109 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:26.109 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:26.109 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:26.109 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:26.109 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:26.109 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:27.046 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:27.046 06:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.hYg /tmp/spdk.key-null.e4v /tmp/spdk.key-sha256.Glx /tmp/spdk.key-sha384.hLP /tmp/spdk.key-sha512.UxV 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:35:27.046 06:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:30.335 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:35:30.335 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:30.335 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:35:30.335 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:35:30.335 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:35:30.335 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:35:30.335 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:35:30.335 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:35:30.335 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:35:30.335 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:35:30.335 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:35:30.335 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:35:30.335 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:35:30.335 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:35:30.335 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:35:30.335 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:35:30.335 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:35:30.336 00:35:30.336 real 0m53.759s 00:35:30.336 user 0m48.444s 00:35:30.336 sys 0m12.611s 00:35:30.336 06:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:30.336 06:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.336 ************************************ 00:35:30.336 END TEST nvmf_auth_host 00:35:30.336 ************************************ 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:35:30.336 06:25:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.336 ************************************ 00:35:30.336 START TEST nvmf_digest 00:35:30.336 ************************************ 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:30.336 * Looking for test storage... 00:35:30.336 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:30.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.336 --rc genhtml_branch_coverage=1 00:35:30.336 --rc genhtml_function_coverage=1 00:35:30.336 --rc genhtml_legend=1 00:35:30.336 --rc geninfo_all_blocks=1 00:35:30.336 --rc geninfo_unexecuted_blocks=1 00:35:30.336 00:35:30.336 ' 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:30.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.336 --rc genhtml_branch_coverage=1 00:35:30.336 --rc genhtml_function_coverage=1 00:35:30.336 --rc genhtml_legend=1 00:35:30.336 --rc geninfo_all_blocks=1 00:35:30.336 --rc geninfo_unexecuted_blocks=1 00:35:30.336 00:35:30.336 ' 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:30.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.336 --rc genhtml_branch_coverage=1 00:35:30.336 --rc genhtml_function_coverage=1 00:35:30.336 --rc genhtml_legend=1 00:35:30.336 --rc geninfo_all_blocks=1 00:35:30.336 --rc geninfo_unexecuted_blocks=1 00:35:30.336 00:35:30.336 ' 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:30.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.336 --rc genhtml_branch_coverage=1 00:35:30.336 --rc genhtml_function_coverage=1 00:35:30.336 --rc genhtml_legend=1 00:35:30.336 --rc geninfo_all_blocks=1 00:35:30.336 --rc geninfo_unexecuted_blocks=1 00:35:30.336 00:35:30.336 ' 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:30.336 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:35:30.337 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:30.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:30.337 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:30.337 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:30.337 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:30.337 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:30.337 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:35:30.337 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:35:30.337 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:35:30.337 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:35:30.337 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:30.337 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:30.337 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:30.337 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:30.337 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:30.337 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:30.337 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:30.337 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:30.337 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:30.337 06:25:50 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:30.337 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:35:30.337 06:25:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:36.907 06:25:55 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:36.907 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:36.907 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:36.907 Found net devices under 0000:af:00.0: cvl_0_0 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:36.907 Found net devices under 0000:af:00.1: cvl_0_1 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:36.907 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:36.908 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:36.908 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:36.908 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:36.908 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:36.908 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:36.908 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:35:36.908 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:36.908 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:36.908 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:36.908 06:25:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:36.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:36.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:35:36.908 00:35:36.908 --- 10.0.0.2 ping statistics --- 00:35:36.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:36.908 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:36.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:36.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:35:36.908 00:35:36.908 --- 10.0.0.1 ping statistics --- 00:35:36.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:36.908 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:36.908 ************************************ 00:35:36.908 START TEST nvmf_digest_clean 00:35:36.908 ************************************ 00:35:36.908 
06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1193764 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1193764 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1193764 ']' 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:36.908 06:25:56 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:36.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:36.908 [2024-12-15 06:25:56.213498] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:36.908 [2024-12-15 06:25:56.213538] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:36.908 [2024-12-15 06:25:56.291720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:36.908 [2024-12-15 06:25:56.312412] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:36.908 [2024-12-15 06:25:56.312447] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:36.908 [2024-12-15 06:25:56.312454] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:36.908 [2024-12-15 06:25:56.312460] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:36.908 [2024-12-15 06:25:56.312465] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:36.908 [2024-12-15 06:25:56.312959] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:36.908 null0 00:35:36.908 [2024-12-15 06:25:56.484210] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:36.908 [2024-12-15 06:25:56.508412] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1193789 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1193789 /var/tmp/bperf.sock 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1193789 ']' 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:36.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:36.908 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:36.908 [2024-12-15 06:25:56.560930] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:36.908 [2024-12-15 06:25:56.560968] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1193789 ] 00:35:36.909 [2024-12-15 06:25:56.635126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:36.909 [2024-12-15 06:25:56.656823] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:36.909 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:36.909 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:36.909 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:36.909 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:36.909 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:36.909 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:36.909 06:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:37.167 nvme0n1 00:35:37.167 06:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:37.167 06:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:37.426 Running I/O for 2 seconds... 00:35:39.297 25632.00 IOPS, 100.12 MiB/s [2024-12-15T05:25:59.437Z] 25125.00 IOPS, 98.14 MiB/s 00:35:39.297 Latency(us) 00:35:39.297 [2024-12-15T05:25:59.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:39.297 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:39.297 nvme0n1 : 2.00 25143.53 98.22 0.00 0.00 5086.12 2481.01 15291.73 00:35:39.297 [2024-12-15T05:25:59.437Z] =================================================================================================================== 00:35:39.297 [2024-12-15T05:25:59.437Z] Total : 25143.53 98.22 0.00 0.00 5086.12 2481.01 15291.73 00:35:39.297 { 00:35:39.297 "results": [ 00:35:39.297 { 00:35:39.297 "job": "nvme0n1", 00:35:39.297 "core_mask": "0x2", 00:35:39.297 "workload": "randread", 00:35:39.297 "status": "finished", 00:35:39.297 "queue_depth": 128, 00:35:39.297 "io_size": 4096, 00:35:39.297 "runtime": 2.003617, 00:35:39.297 "iops": 25143.527929739066, 00:35:39.297 "mibps": 98.21690597554323, 00:35:39.297 "io_failed": 0, 00:35:39.297 "io_timeout": 0, 00:35:39.297 "avg_latency_us": 5086.124869170027, 00:35:39.297 "min_latency_us": 2481.0057142857145, 00:35:39.297 "max_latency_us": 15291.733333333334 00:35:39.297 } 00:35:39.297 ], 00:35:39.297 "core_count": 1 00:35:39.297 } 00:35:39.297 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:39.297 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:35:39.297 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:39.297 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:39.297 | select(.opcode=="crc32c") 00:35:39.297 | "\(.module_name) \(.executed)"' 00:35:39.297 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:39.556 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:39.556 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:39.556 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:39.556 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:39.556 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1193789 00:35:39.556 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1193789 ']' 00:35:39.556 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1193789 00:35:39.556 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:39.556 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:39.556 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1193789 00:35:39.556 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:39.556 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:39.556 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1193789' 00:35:39.556 killing process with pid 1193789 00:35:39.556 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1193789 00:35:39.556 Received shutdown signal, test time was about 2.000000 seconds 00:35:39.556 00:35:39.556 Latency(us) 00:35:39.556 [2024-12-15T05:25:59.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:39.557 [2024-12-15T05:25:59.697Z] =================================================================================================================== 00:35:39.557 [2024-12-15T05:25:59.697Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:39.557 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1193789 00:35:39.816 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:35:39.816 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:39.816 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:39.816 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:39.816 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:39.816 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:39.816 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:39.816 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1194251 00:35:39.816 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 1194251 /var/tmp/bperf.sock 00:35:39.816 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:39.816 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1194251 ']' 00:35:39.816 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:39.816 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:39.816 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:39.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:39.816 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:39.816 06:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:39.816 [2024-12-15 06:25:59.853893] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:39.816 [2024-12-15 06:25:59.853941] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1194251 ] 00:35:39.816 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:39.816 Zero copy mechanism will not be used. 
00:35:39.816 [2024-12-15 06:25:59.930115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:39.816 [2024-12-15 06:25:59.953044] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:40.075 06:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:40.075 06:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:40.075 06:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:40.075 06:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:40.075 06:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:40.334 06:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:40.334 06:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:40.593 nvme0n1 00:35:40.593 06:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:40.593 06:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:40.852 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:40.852 Zero copy mechanism will not be used. 00:35:40.852 Running I/O for 2 seconds... 
00:35:42.725 5576.00 IOPS, 697.00 MiB/s [2024-12-15T05:26:02.865Z] 5547.00 IOPS, 693.38 MiB/s 00:35:42.725 Latency(us) 00:35:42.725 [2024-12-15T05:26:02.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:42.725 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:42.725 nvme0n1 : 2.00 5547.09 693.39 0.00 0.00 2881.78 616.35 8363.64 00:35:42.725 [2024-12-15T05:26:02.865Z] =================================================================================================================== 00:35:42.725 [2024-12-15T05:26:02.865Z] Total : 5547.09 693.39 0.00 0.00 2881.78 616.35 8363.64 00:35:42.725 { 00:35:42.725 "results": [ 00:35:42.725 { 00:35:42.725 "job": "nvme0n1", 00:35:42.725 "core_mask": "0x2", 00:35:42.725 "workload": "randread", 00:35:42.725 "status": "finished", 00:35:42.725 "queue_depth": 16, 00:35:42.725 "io_size": 131072, 00:35:42.725 "runtime": 2.003031, 00:35:42.725 "iops": 5547.093379982636, 00:35:42.725 "mibps": 693.3866724978295, 00:35:42.725 "io_failed": 0, 00:35:42.725 "io_timeout": 0, 00:35:42.725 "avg_latency_us": 2881.7817887893166, 00:35:42.725 "min_latency_us": 616.3504761904762, 00:35:42.725 "max_latency_us": 8363.641904761906 00:35:42.725 } 00:35:42.725 ], 00:35:42.725 "core_count": 1 00:35:42.725 } 00:35:42.725 06:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:42.725 06:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:42.725 06:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:42.725 06:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:42.725 | select(.opcode=="crc32c") 00:35:42.725 | "\(.module_name) \(.executed)"' 00:35:42.725 06:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:42.985 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:42.985 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:42.985 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:42.985 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:42.985 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1194251 00:35:42.985 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1194251 ']' 00:35:42.985 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1194251 00:35:42.985 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:42.985 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:42.985 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1194251 00:35:42.985 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:42.985 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:42.985 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1194251' 00:35:42.985 killing process with pid 1194251 00:35:42.985 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1194251 00:35:42.985 Received shutdown signal, test time was about 2.000000 seconds 
00:35:42.985 00:35:42.985 Latency(us) 00:35:42.985 [2024-12-15T05:26:03.125Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:42.985 [2024-12-15T05:26:03.125Z] =================================================================================================================== 00:35:42.985 [2024-12-15T05:26:03.125Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:42.985 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1194251 00:35:43.243 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:35:43.243 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:43.243 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:43.243 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:43.243 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:43.243 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:43.243 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:43.243 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1194917 00:35:43.243 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1194917 /var/tmp/bperf.sock 00:35:43.243 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:43.243 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1194917 ']' 00:35:43.243 06:26:03 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:43.243 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:43.243 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:43.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:43.243 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:43.243 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:43.243 [2024-12-15 06:26:03.267173] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:43.244 [2024-12-15 06:26:03.267220] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1194917 ] 00:35:43.244 [2024-12-15 06:26:03.343111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:43.244 [2024-12-15 06:26:03.365701] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:43.502 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:43.502 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:43.502 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:43.502 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:43.502 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:43.761 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:43.761 06:26:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:44.020 nvme0n1 00:35:44.020 06:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:44.020 06:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:44.278 Running I/O for 2 seconds... 
00:35:46.154 28778.00 IOPS, 112.41 MiB/s [2024-12-15T05:26:06.294Z] 28885.00 IOPS, 112.83 MiB/s 00:35:46.154 Latency(us) 00:35:46.154 [2024-12-15T05:26:06.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:46.154 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:46.154 nvme0n1 : 2.01 28890.19 112.85 0.00 0.00 4424.50 1864.66 10298.51 00:35:46.154 [2024-12-15T05:26:06.294Z] =================================================================================================================== 00:35:46.154 [2024-12-15T05:26:06.294Z] Total : 28890.19 112.85 0.00 0.00 4424.50 1864.66 10298.51 00:35:46.154 { 00:35:46.154 "results": [ 00:35:46.154 { 00:35:46.154 "job": "nvme0n1", 00:35:46.154 "core_mask": "0x2", 00:35:46.154 "workload": "randwrite", 00:35:46.154 "status": "finished", 00:35:46.154 "queue_depth": 128, 00:35:46.154 "io_size": 4096, 00:35:46.154 "runtime": 2.006321, 00:35:46.154 "iops": 28890.19254645692, 00:35:46.154 "mibps": 112.85231463459735, 00:35:46.154 "io_failed": 0, 00:35:46.154 "io_timeout": 0, 00:35:46.154 "avg_latency_us": 4424.4958194841865, 00:35:46.154 "min_latency_us": 1864.655238095238, 00:35:46.154 "max_latency_us": 10298.514285714286 00:35:46.154 } 00:35:46.154 ], 00:35:46.154 "core_count": 1 00:35:46.154 } 00:35:46.154 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:46.155 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:46.155 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:46.155 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:46.155 | select(.opcode=="crc32c") 00:35:46.155 | "\(.module_name) \(.executed)"' 00:35:46.155 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:46.414 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:46.414 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:46.414 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:46.414 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:46.414 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1194917 00:35:46.414 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1194917 ']' 00:35:46.414 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1194917 00:35:46.414 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:46.414 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:46.414 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1194917 00:35:46.414 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:46.414 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:46.414 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1194917' 00:35:46.414 killing process with pid 1194917 00:35:46.414 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1194917 00:35:46.414 Received shutdown signal, test time was about 2.000000 seconds 
00:35:46.414 00:35:46.414 Latency(us) 00:35:46.414 [2024-12-15T05:26:06.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:46.414 [2024-12-15T05:26:06.554Z] =================================================================================================================== 00:35:46.414 [2024-12-15T05:26:06.554Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:46.414 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1194917 00:35:46.673 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:46.673 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:46.673 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:46.673 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:46.673 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:46.673 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:46.673 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:46.673 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1195376 00:35:46.673 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1195376 /var/tmp/bperf.sock 00:35:46.673 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:46.673 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1195376 ']' 00:35:46.673 06:26:06 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:46.673 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:46.673 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:46.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:46.673 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:46.673 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:46.673 [2024-12-15 06:26:06.664265] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:46.673 [2024-12-15 06:26:06.664313] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195376 ] 00:35:46.674 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:46.674 Zero copy mechanism will not be used. 
00:35:46.674 [2024-12-15 06:26:06.739545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:46.674 [2024-12-15 06:26:06.759231] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:46.932 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:46.932 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:46.932 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:46.932 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:46.933 06:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:47.191 06:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:47.191 06:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:47.191 nvme0n1 00:35:47.450 06:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:47.450 06:26:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:47.450 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:47.450 Zero copy mechanism will not be used. 00:35:47.450 Running I/O for 2 seconds... 
00:35:49.323 6480.00 IOPS, 810.00 MiB/s [2024-12-15T05:26:09.463Z] 6698.00 IOPS, 837.25 MiB/s 00:35:49.323 Latency(us) 00:35:49.323 [2024-12-15T05:26:09.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:49.323 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:49.323 nvme0n1 : 2.00 6694.14 836.77 0.00 0.00 2385.59 1880.26 10797.84 00:35:49.323 [2024-12-15T05:26:09.463Z] =================================================================================================================== 00:35:49.323 [2024-12-15T05:26:09.463Z] Total : 6694.14 836.77 0.00 0.00 2385.59 1880.26 10797.84 00:35:49.323 { 00:35:49.323 "results": [ 00:35:49.323 { 00:35:49.323 "job": "nvme0n1", 00:35:49.323 "core_mask": "0x2", 00:35:49.323 "workload": "randwrite", 00:35:49.323 "status": "finished", 00:35:49.323 "queue_depth": 16, 00:35:49.323 "io_size": 131072, 00:35:49.323 "runtime": 2.003542, 00:35:49.323 "iops": 6694.144669789802, 00:35:49.323 "mibps": 836.7680837237252, 00:35:49.323 "io_failed": 0, 00:35:49.323 "io_timeout": 0, 00:35:49.323 "avg_latency_us": 2385.594387400054, 00:35:49.323 "min_latency_us": 1880.2590476190476, 00:35:49.323 "max_latency_us": 10797.83619047619 00:35:49.323 } 00:35:49.323 ], 00:35:49.323 "core_count": 1 00:35:49.323 } 00:35:49.582 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:49.582 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:49.582 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:49.582 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:49.582 | select(.opcode=="crc32c") 00:35:49.582 | "\(.module_name) \(.executed)"' 00:35:49.582 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:49.582 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:49.582 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:49.582 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:49.582 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:49.582 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1195376 00:35:49.582 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1195376 ']' 00:35:49.582 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1195376 00:35:49.582 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:49.582 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:49.582 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1195376 00:35:49.841 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:49.841 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:49.841 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1195376' 00:35:49.841 killing process with pid 1195376 00:35:49.841 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1195376 00:35:49.841 Received shutdown signal, test time was about 2.000000 seconds 
00:35:49.841 00:35:49.841 Latency(us) 00:35:49.841 [2024-12-15T05:26:09.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:49.841 [2024-12-15T05:26:09.981Z] =================================================================================================================== 00:35:49.841 [2024-12-15T05:26:09.981Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:49.841 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1195376 00:35:49.841 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1193764 00:35:49.841 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1193764 ']' 00:35:49.841 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1193764 00:35:49.841 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:49.841 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:49.841 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1193764 00:35:49.841 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:49.841 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:49.841 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1193764' 00:35:49.841 killing process with pid 1193764 00:35:49.841 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1193764 00:35:49.841 06:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1193764 00:35:50.101 00:35:50.101 
real 0m13.939s 00:35:50.101 user 0m26.834s 00:35:50.101 sys 0m4.453s 00:35:50.101 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:50.101 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:50.101 ************************************ 00:35:50.101 END TEST nvmf_digest_clean 00:35:50.101 ************************************ 00:35:50.101 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:50.101 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:50.101 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:50.101 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:50.101 ************************************ 00:35:50.101 START TEST nvmf_digest_error 00:35:50.101 ************************************ 00:35:50.101 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:35:50.101 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:50.101 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:50.101 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:50.101 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:50.101 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1196013 00:35:50.101 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1196013 00:35:50.101 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:50.101 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1196013 ']' 00:35:50.101 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:50.101 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:50.101 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:50.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:50.101 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:50.101 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:50.101 [2024-12-15 06:26:10.222577] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:50.101 [2024-12-15 06:26:10.222621] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:50.363 [2024-12-15 06:26:10.302244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:50.363 [2024-12-15 06:26:10.323238] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:50.363 [2024-12-15 06:26:10.323274] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:50.363 [2024-12-15 06:26:10.323281] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:50.363 [2024-12-15 06:26:10.323287] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:50.363 [2024-12-15 06:26:10.323292] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:50.363 [2024-12-15 06:26:10.323785] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:50.363 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:50.363 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:50.363 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:50.363 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:50.363 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:50.363 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:50.363 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:50.363 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.363 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:50.363 [2024-12-15 06:26:10.412275] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:50.363 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.363 06:26:10 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:50.363 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:50.363 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.363 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:50.363 null0 00:35:50.623 [2024-12-15 06:26:10.503497] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:50.623 [2024-12-15 06:26:10.527681] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:50.623 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.623 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:50.623 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:50.623 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:50.623 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:50.623 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:50.623 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1196109 00:35:50.623 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1196109 /var/tmp/bperf.sock 00:35:50.623 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:50.623 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1196109 ']' 
00:35:50.623 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:50.623 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:50.623 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:50.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:50.623 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:50.623 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:50.623 [2024-12-15 06:26:10.580807] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:50.623 [2024-12-15 06:26:10.580846] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196109 ] 00:35:50.623 [2024-12-15 06:26:10.655487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:50.623 [2024-12-15 06:26:10.677222] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:50.881 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:50.881 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:50.881 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:50.881 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:50.881 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:50.881 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.881 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:50.881 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.881 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:50.881 06:26:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:51.448 nvme0n1 00:35:51.448 06:26:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:51.448 06:26:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.448 06:26:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:51.448 06:26:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.448 06:26:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:51.448 06:26:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:51.448 Running I/O for 2 seconds... 00:35:51.448 [2024-12-15 06:26:11.509305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.448 [2024-12-15 06:26:11.509336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.448 [2024-12-15 06:26:11.509348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.448 [2024-12-15 06:26:11.520109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.448 [2024-12-15 06:26:11.520132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.448 [2024-12-15 06:26:11.520141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.448 [2024-12-15 06:26:11.529070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.448 [2024-12-15 06:26:11.529092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.448 [2024-12-15 06:26:11.529100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.448 [2024-12-15 06:26:11.540502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.448 [2024-12-15 06:26:11.540524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:752 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.448 [2024-12-15 06:26:11.540533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.448 [2024-12-15 06:26:11.551000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.448 [2024-12-15 06:26:11.551022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.448 [2024-12-15 06:26:11.551030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.448 [2024-12-15 06:26:11.559668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.448 [2024-12-15 06:26:11.559689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.448 [2024-12-15 06:26:11.559697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.448 [2024-12-15 06:26:11.568919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.448 [2024-12-15 06:26:11.568940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.448 [2024-12-15 06:26:11.568949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.448 [2024-12-15 06:26:11.578513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.448 [2024-12-15 06:26:11.578533] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.448 [2024-12-15 06:26:11.578543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.708 [2024-12-15 06:26:11.588028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.708 [2024-12-15 06:26:11.588048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.708 [2024-12-15 06:26:11.588056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.708 [2024-12-15 06:26:11.596915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.708 [2024-12-15 06:26:11.596934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.708 [2024-12-15 06:26:11.596942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.708 [2024-12-15 06:26:11.606867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.708 [2024-12-15 06:26:11.606887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.708 [2024-12-15 06:26:11.606895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.708 [2024-12-15 06:26:11.615833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.708 [2024-12-15 
06:26:11.615853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.708 [2024-12-15 06:26:11.615861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.708 [2024-12-15 06:26:11.625077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.708 [2024-12-15 06:26:11.625098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.708 [2024-12-15 06:26:11.625106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.708 [2024-12-15 06:26:11.634406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.708 [2024-12-15 06:26:11.634427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.708 [2024-12-15 06:26:11.634435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.708 [2024-12-15 06:26:11.643217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.708 [2024-12-15 06:26:11.643237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.708 [2024-12-15 06:26:11.643249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.708 [2024-12-15 06:26:11.652879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x9396e0) 00:35:51.708 [2024-12-15 06:26:11.652898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.708 [2024-12-15 06:26:11.652906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.708 [2024-12-15 06:26:11.665214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.708 [2024-12-15 06:26:11.665235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.708 [2024-12-15 06:26:11.665243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.708 [2024-12-15 06:26:11.676402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.708 [2024-12-15 06:26:11.676423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.708 [2024-12-15 06:26:11.676432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.708 [2024-12-15 06:26:11.684581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.708 [2024-12-15 06:26:11.684602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.708 [2024-12-15 06:26:11.684610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.708 [2024-12-15 06:26:11.694598] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.708 [2024-12-15 06:26:11.694620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.708 [2024-12-15 06:26:11.694629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.708 [2024-12-15 06:26:11.703561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.708 [2024-12-15 06:26:11.703580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.708 [2024-12-15 06:26:11.703588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.708 [2024-12-15 06:26:11.712790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.708 [2024-12-15 06:26:11.712810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.708 [2024-12-15 06:26:11.712818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.708 [2024-12-15 06:26:11.721689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.708 [2024-12-15 06:26:11.721709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.708 [2024-12-15 06:26:11.721717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:35:51.708 [2024-12-15 06:26:11.730379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.708 [2024-12-15 06:26:11.730402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.708 [2024-12-15 06:26:11.730411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.708 [2024-12-15 06:26:11.741164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.708 [2024-12-15 06:26:11.741185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.708 [2024-12-15 06:26:11.741194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.708 [2024-12-15 06:26:11.749017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.708 [2024-12-15 06:26:11.749037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.708 [2024-12-15 06:26:11.749045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.708 [2024-12-15 06:26:11.758671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.708 [2024-12-15 06:26:11.758691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.708 [2024-12-15 06:26:11.758699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.708 [2024-12-15 06:26:11.769836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.708 [2024-12-15 06:26:11.769857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.708 [2024-12-15 06:26:11.769866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.708 [2024-12-15 06:26:11.778978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.708 [2024-12-15 06:26:11.779004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.708 [2024-12-15 06:26:11.779012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.708 [2024-12-15 06:26:11.787868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.708 [2024-12-15 06:26:11.787887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.708 [2024-12-15 06:26:11.787896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.708 [2024-12-15 06:26:11.797205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.708 [2024-12-15 06:26:11.797224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.708 [2024-12-15 06:26:11.797233] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.708 [2024-12-15 06:26:11.806809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.708 [2024-12-15 06:26:11.806829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.708 [2024-12-15 06:26:11.806837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.708 [2024-12-15 06:26:11.815866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.709 [2024-12-15 06:26:11.815886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.709 [2024-12-15 06:26:11.815894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.709 [2024-12-15 06:26:11.824708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.709 [2024-12-15 06:26:11.824728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.709 [2024-12-15 06:26:11.824735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.709 [2024-12-15 06:26:11.835156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.709 [2024-12-15 06:26:11.835175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10648 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:51.709 [2024-12-15 06:26:11.835184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.709 [2024-12-15 06:26:11.844179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.709 [2024-12-15 06:26:11.844199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.709 [2024-12-15 06:26:11.844207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.968 [2024-12-15 06:26:11.853738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.968 [2024-12-15 06:26:11.853757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.968 [2024-12-15 06:26:11.853765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.968 [2024-12-15 06:26:11.862579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.968 [2024-12-15 06:26:11.862599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.968 [2024-12-15 06:26:11.862606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.968 [2024-12-15 06:26:11.871906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.968 [2024-12-15 06:26:11.871926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:107 nsid:1 lba:13936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.968 [2024-12-15 06:26:11.871933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.968 [2024-12-15 06:26:11.882013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.968 [2024-12-15 06:26:11.882033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.968 [2024-12-15 06:26:11.882041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.968 [2024-12-15 06:26:11.892910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.968 [2024-12-15 06:26:11.892936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.968 [2024-12-15 06:26:11.892944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.968 [2024-12-15 06:26:11.905086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.968 [2024-12-15 06:26:11.905107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.968 [2024-12-15 06:26:11.905115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.968 [2024-12-15 06:26:11.916364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.968 [2024-12-15 06:26:11.916384] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.968 [2024-12-15 06:26:11.916392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.968 [2024-12-15 06:26:11.924253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.968 [2024-12-15 06:26:11.924273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.968 [2024-12-15 06:26:11.924281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.968 [2024-12-15 06:26:11.934656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.968 [2024-12-15 06:26:11.934675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.968 [2024-12-15 06:26:11.934684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.968 [2024-12-15 06:26:11.947642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.968 [2024-12-15 06:26:11.947662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.968 [2024-12-15 06:26:11.947669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.968 [2024-12-15 06:26:11.955928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x9396e0) 00:35:51.968 [2024-12-15 06:26:11.955948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.968 [2024-12-15 06:26:11.955956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.968 [2024-12-15 06:26:11.966604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.968 [2024-12-15 06:26:11.966624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.968 [2024-12-15 06:26:11.966632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.968 [2024-12-15 06:26:11.979492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.968 [2024-12-15 06:26:11.979517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.968 [2024-12-15 06:26:11.979525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.968 [2024-12-15 06:26:11.989757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.968 [2024-12-15 06:26:11.989777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.968 [2024-12-15 06:26:11.989785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.968 [2024-12-15 06:26:12.000827] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.968 [2024-12-15 06:26:12.000847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.968 [2024-12-15 06:26:12.000855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.968 [2024-12-15 06:26:12.011369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.968 [2024-12-15 06:26:12.011389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.968 [2024-12-15 06:26:12.011397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.968 [2024-12-15 06:26:12.019377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.968 [2024-12-15 06:26:12.019396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.968 [2024-12-15 06:26:12.019404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.968 [2024-12-15 06:26:12.029568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.968 [2024-12-15 06:26:12.029588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.969 [2024-12-15 06:26:12.029596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:51.969 [2024-12-15 06:26:12.038580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.969 [2024-12-15 06:26:12.038600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.969 [2024-12-15 06:26:12.038608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.969 [2024-12-15 06:26:12.049244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.969 [2024-12-15 06:26:12.049264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.969 [2024-12-15 06:26:12.049271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.969 [2024-12-15 06:26:12.059791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.969 [2024-12-15 06:26:12.059810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.969 [2024-12-15 06:26:12.059818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.969 [2024-12-15 06:26:12.068638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.969 [2024-12-15 06:26:12.068658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.969 [2024-12-15 06:26:12.068669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.969 [2024-12-15 06:26:12.082632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.969 [2024-12-15 06:26:12.082653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.969 [2024-12-15 06:26:12.082661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.969 [2024-12-15 06:26:12.091029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.969 [2024-12-15 06:26:12.091049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.969 [2024-12-15 06:26:12.091056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.969 [2024-12-15 06:26:12.103048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:51.969 [2024-12-15 06:26:12.103067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.969 [2024-12-15 06:26:12.103076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.228 [2024-12-15 06:26:12.111199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.228 [2024-12-15 06:26:12.111218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.228 [2024-12-15 06:26:12.111225] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.228 [2024-12-15 06:26:12.123296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.228 [2024-12-15 06:26:12.123316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.228 [2024-12-15 06:26:12.123324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.228 [2024-12-15 06:26:12.133931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.228 [2024-12-15 06:26:12.133951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.228 [2024-12-15 06:26:12.133959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.228 [2024-12-15 06:26:12.142664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.228 [2024-12-15 06:26:12.142683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.228 [2024-12-15 06:26:12.142691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.228 [2024-12-15 06:26:12.153669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.228 [2024-12-15 06:26:12.153689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12035 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:52.228 [2024-12-15 06:26:12.153697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.228 [2024-12-15 06:26:12.161680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.228 [2024-12-15 06:26:12.161703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.228 [2024-12-15 06:26:12.161711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.228 [2024-12-15 06:26:12.174461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.228 [2024-12-15 06:26:12.174481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.228 [2024-12-15 06:26:12.174489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.228 [2024-12-15 06:26:12.186038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.228 [2024-12-15 06:26:12.186058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.228 [2024-12-15 06:26:12.186066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.228 [2024-12-15 06:26:12.197824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.228 [2024-12-15 06:26:12.197844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:6711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.228 [2024-12-15 06:26:12.197852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.228 [2024-12-15 06:26:12.210050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.228 [2024-12-15 06:26:12.210070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.228 [2024-12-15 06:26:12.210077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.228 [2024-12-15 06:26:12.221238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.228 [2024-12-15 06:26:12.221261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.228 [2024-12-15 06:26:12.221272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.228 [2024-12-15 06:26:12.230411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.228 [2024-12-15 06:26:12.230430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.228 [2024-12-15 06:26:12.230438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.228 [2024-12-15 06:26:12.241068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.228 [2024-12-15 06:26:12.241088] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.228 [2024-12-15 06:26:12.241096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.228 [2024-12-15 06:26:12.248805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.228 [2024-12-15 06:26:12.248825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.228 [2024-12-15 06:26:12.248833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.228 [2024-12-15 06:26:12.259468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.228 [2024-12-15 06:26:12.259488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.228 [2024-12-15 06:26:12.259497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.228 [2024-12-15 06:26:12.267385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.228 [2024-12-15 06:26:12.267404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.228 [2024-12-15 06:26:12.267413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.228 [2024-12-15 06:26:12.278723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x9396e0) 00:35:52.228 [2024-12-15 06:26:12.278743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.229 [2024-12-15 06:26:12.278751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.229 [2024-12-15 06:26:12.288561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.229 [2024-12-15 06:26:12.288581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.229 [2024-12-15 06:26:12.288588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.229 [2024-12-15 06:26:12.298132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.229 [2024-12-15 06:26:12.298151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.229 [2024-12-15 06:26:12.298159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.229 [2024-12-15 06:26:12.306454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.229 [2024-12-15 06:26:12.306474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.229 [2024-12-15 06:26:12.306482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.229 [2024-12-15 06:26:12.317260] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.229 [2024-12-15 06:26:12.317279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.229 [2024-12-15 06:26:12.317287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.229 [2024-12-15 06:26:12.327421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.229 [2024-12-15 06:26:12.327440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.229 [2024-12-15 06:26:12.327448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.229 [2024-12-15 06:26:12.335655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.229 [2024-12-15 06:26:12.335674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.229 [2024-12-15 06:26:12.335685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.229 [2024-12-15 06:26:12.346815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.229 [2024-12-15 06:26:12.346835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.229 [2024-12-15 06:26:12.346843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:35:52.229 [2024-12-15 06:26:12.355452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.229 [2024-12-15 06:26:12.355472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.229 [2024-12-15 06:26:12.355480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.488 [2024-12-15 06:26:12.366229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.488 [2024-12-15 06:26:12.366249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.488 [2024-12-15 06:26:12.366257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.488 [2024-12-15 06:26:12.376045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.488 [2024-12-15 06:26:12.376065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.488 [2024-12-15 06:26:12.376073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.488 [2024-12-15 06:26:12.386006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.488 [2024-12-15 06:26:12.386026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.488 [2024-12-15 06:26:12.386034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.488 [2024-12-15 06:26:12.393906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.488 [2024-12-15 06:26:12.393925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.488 [2024-12-15 06:26:12.393932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.488 [2024-12-15 06:26:12.405218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.488 [2024-12-15 06:26:12.405237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.488 [2024-12-15 06:26:12.405245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.488 [2024-12-15 06:26:12.415884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.488 [2024-12-15 06:26:12.415904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.488 [2024-12-15 06:26:12.415911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.488 [2024-12-15 06:26:12.425257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.488 [2024-12-15 06:26:12.425276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.488 [2024-12-15 06:26:12.425284] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.488 [2024-12-15 06:26:12.435833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.488 [2024-12-15 06:26:12.435852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.488 [2024-12-15 06:26:12.435860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.488 [2024-12-15 06:26:12.444914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.488 [2024-12-15 06:26:12.444934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.488 [2024-12-15 06:26:12.444941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.488 [2024-12-15 06:26:12.455998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.488 [2024-12-15 06:26:12.456034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.488 [2024-12-15 06:26:12.456042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.488 [2024-12-15 06:26:12.465303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.488 [2024-12-15 06:26:12.465323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12549 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:52.488 [2024-12-15 06:26:12.465331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.488 [2024-12-15 06:26:12.474360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.488 [2024-12-15 06:26:12.474380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.488 [2024-12-15 06:26:12.474388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.488 [2024-12-15 06:26:12.483464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.488 [2024-12-15 06:26:12.483484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.488 [2024-12-15 06:26:12.483492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.488 [2024-12-15 06:26:12.492563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.488 [2024-12-15 06:26:12.492582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.488 [2024-12-15 06:26:12.492590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.488 25570.00 IOPS, 99.88 MiB/s [2024-12-15T05:26:12.629Z] [2024-12-15 06:26:12.502332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:52.489 [2024-12-15 06:26:12.502352] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.489 [2024-12-15 06:26:12.502364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.489 [2024-12-15 06:26:12.510998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.489 [2024-12-15 06:26:12.511017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.489 [2024-12-15 06:26:12.511025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.489 [2024-12-15 06:26:12.520776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.489 [2024-12-15 06:26:12.520796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.489 [2024-12-15 06:26:12.520803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.489 [2024-12-15 06:26:12.531313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.489 [2024-12-15 06:26:12.531333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.489 [2024-12-15 06:26:12.531341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.489 [2024-12-15 06:26:12.542336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.489 [2024-12-15 06:26:12.542355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.489 [2024-12-15 06:26:12.542363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.489 [2024-12-15 06:26:12.550898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.489 [2024-12-15 06:26:12.550919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.489 [2024-12-15 06:26:12.550927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.489 [2024-12-15 06:26:12.560709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.489 [2024-12-15 06:26:12.560729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.489 [2024-12-15 06:26:12.560737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.489 [2024-12-15 06:26:12.570352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.489 [2024-12-15 06:26:12.570372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.489 [2024-12-15 06:26:12.570380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.489 [2024-12-15 06:26:12.580172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.489 [2024-12-15 06:26:12.580192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.489 [2024-12-15 06:26:12.580200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.489 [2024-12-15 06:26:12.588073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.489 [2024-12-15 06:26:12.588096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.489 [2024-12-15 06:26:12.588104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.489 [2024-12-15 06:26:12.599520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.489 [2024-12-15 06:26:12.599540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.489 [2024-12-15 06:26:12.599548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.489 [2024-12-15 06:26:12.611819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.489 [2024-12-15 06:26:12.611839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.489 [2024-12-15 06:26:12.611846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.489 [2024-12-15 06:26:12.623768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.489 [2024-12-15 06:26:12.623788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.489 [2024-12-15 06:26:12.623796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.748 [2024-12-15 06:26:12.632682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.748 [2024-12-15 06:26:12.632702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.748 [2024-12-15 06:26:12.632709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.748 [2024-12-15 06:26:12.642040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.748 [2024-12-15 06:26:12.642060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.748 [2024-12-15 06:26:12.642068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.748 [2024-12-15 06:26:12.652686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.748 [2024-12-15 06:26:12.652706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.748 [2024-12-15 06:26:12.652714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.748 [2024-12-15 06:26:12.661212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.748 [2024-12-15 06:26:12.661231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.748 [2024-12-15 06:26:12.661239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.748 [2024-12-15 06:26:12.672219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.748 [2024-12-15 06:26:12.672239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.748 [2024-12-15 06:26:12.672247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.748 [2024-12-15 06:26:12.682665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.748 [2024-12-15 06:26:12.682685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.748 [2024-12-15 06:26:12.682694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.748 [2024-12-15 06:26:12.695360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.748 [2024-12-15 06:26:12.695380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.748 [2024-12-15 06:26:12.695388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.748 [2024-12-15 06:26:12.706504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.748 [2024-12-15 06:26:12.706523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.748 [2024-12-15 06:26:12.706531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.748 [2024-12-15 06:26:12.715364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.748 [2024-12-15 06:26:12.715383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.748 [2024-12-15 06:26:12.715391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.748 [2024-12-15 06:26:12.724608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.748 [2024-12-15 06:26:12.724628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.748 [2024-12-15 06:26:12.724636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.748 [2024-12-15 06:26:12.734273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.748 [2024-12-15 06:26:12.734293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.748 [2024-12-15 06:26:12.734300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.748 [2024-12-15 06:26:12.745864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.749 [2024-12-15 06:26:12.745886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.749 [2024-12-15 06:26:12.745894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.749 [2024-12-15 06:26:12.755742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.749 [2024-12-15 06:26:12.755762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.749 [2024-12-15 06:26:12.755770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.749 [2024-12-15 06:26:12.764275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.749 [2024-12-15 06:26:12.764295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.749 [2024-12-15 06:26:12.764306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.749 [2024-12-15 06:26:12.775508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.749 [2024-12-15 06:26:12.775531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.749 [2024-12-15 06:26:12.775539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.749 [2024-12-15 06:26:12.784986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.749 [2024-12-15 06:26:12.785012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.749 [2024-12-15 06:26:12.785020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.749 [2024-12-15 06:26:12.793640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.749 [2024-12-15 06:26:12.793661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.749 [2024-12-15 06:26:12.793668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.749 [2024-12-15 06:26:12.803109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.749 [2024-12-15 06:26:12.803129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.749 [2024-12-15 06:26:12.803137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.749 [2024-12-15 06:26:12.812391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.749 [2024-12-15 06:26:12.812411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.749 [2024-12-15 06:26:12.812418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.749 [2024-12-15 06:26:12.822205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.749 [2024-12-15 06:26:12.822226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.749 [2024-12-15 06:26:12.822234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.749 [2024-12-15 06:26:12.831896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.749 [2024-12-15 06:26:12.831917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.749 [2024-12-15 06:26:12.831925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.749 [2024-12-15 06:26:12.839649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.749 [2024-12-15 06:26:12.839670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.749 [2024-12-15 06:26:12.839678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.749 [2024-12-15 06:26:12.849892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.749 [2024-12-15 06:26:12.849917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.749 [2024-12-15 06:26:12.849925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.749 [2024-12-15 06:26:12.859989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.749 [2024-12-15 06:26:12.860016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.749 [2024-12-15 06:26:12.860024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.749 [2024-12-15 06:26:12.869692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.749 [2024-12-15 06:26:12.869713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.749 [2024-12-15 06:26:12.869721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:52.749 [2024-12-15 06:26:12.879201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:52.749 [2024-12-15 06:26:12.879222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.749 [2024-12-15 06:26:12.879230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.034 [2024-12-15 06:26:12.888785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.034 [2024-12-15 06:26:12.888805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.034 [2024-12-15 06:26:12.888813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.034 [2024-12-15 06:26:12.897988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.034 [2024-12-15 06:26:12.898015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.035 [2024-12-15 06:26:12.898023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.035 [2024-12-15 06:26:12.907999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.035 [2024-12-15 06:26:12.908019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.035 [2024-12-15 06:26:12.908027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.035 [2024-12-15 06:26:12.916598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.035 [2024-12-15 06:26:12.916618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.035 [2024-12-15 06:26:12.916626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.035 [2024-12-15 06:26:12.925617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.035 [2024-12-15 06:26:12.925638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.035 [2024-12-15 06:26:12.925646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.035 [2024-12-15 06:26:12.935439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.035 [2024-12-15 06:26:12.935461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.035 [2024-12-15 06:26:12.935471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.035 [2024-12-15 06:26:12.945809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.035 [2024-12-15 06:26:12.945831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.035 [2024-12-15 06:26:12.945839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.035 [2024-12-15 06:26:12.955083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.035 [2024-12-15 06:26:12.955103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.035 [2024-12-15 06:26:12.955111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.035 [2024-12-15 06:26:12.963607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.035 [2024-12-15 06:26:12.963627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.035 [2024-12-15 06:26:12.963635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.035 [2024-12-15 06:26:12.972301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.035 [2024-12-15 06:26:12.972322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.035 [2024-12-15 06:26:12.972330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.035 [2024-12-15 06:26:12.981226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.035 [2024-12-15 06:26:12.981246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.035 [2024-12-15 06:26:12.981254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.035 [2024-12-15 06:26:12.992055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.035 [2024-12-15 06:26:12.992076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.035 [2024-12-15 06:26:12.992084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.035 [2024-12-15 06:26:13.000224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.035 [2024-12-15 06:26:13.000244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.035 [2024-12-15 06:26:13.000252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.035 [2024-12-15 06:26:13.012072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.035 [2024-12-15 06:26:13.012093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.035 [2024-12-15 06:26:13.012104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.035 [2024-12-15 06:26:13.020862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.035 [2024-12-15 06:26:13.020883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.035 [2024-12-15 06:26:13.020891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.035 [2024-12-15 06:26:13.031891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.035 [2024-12-15 06:26:13.031912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.035 [2024-12-15 06:26:13.031920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.035 [2024-12-15 06:26:13.041059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.035 [2024-12-15 06:26:13.041079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.035 [2024-12-15 06:26:13.041087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.035 [2024-12-15 06:26:13.049845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.035 [2024-12-15 06:26:13.049865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.035 [2024-12-15 06:26:13.049873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.035 [2024-12-15 06:26:13.060067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.035 [2024-12-15 06:26:13.060087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.035 [2024-12-15 06:26:13.060095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.035 [2024-12-15 06:26:13.069541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.035 [2024-12-15 06:26:13.069562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.035 [2024-12-15 06:26:13.069570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.035 [2024-12-15 06:26:13.079475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.035 [2024-12-15 06:26:13.079495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.035 [2024-12-15 06:26:13.079502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.035 [2024-12-15 06:26:13.088578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.035 [2024-12-15 06:26:13.088599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.035 [2024-12-15 06:26:13.088607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.035 [2024-12-15 06:26:13.097746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.035 [2024-12-15 06:26:13.097766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.035 [2024-12-15 06:26:13.097774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.035 [2024-12-15 06:26:13.106024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.035 [2024-12-15 06:26:13.106044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.035 [2024-12-15 06:26:13.106052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.035 [2024-12-15 06:26:13.116307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.035 [2024-12-15 06:26:13.116328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.035 [2024-12-15 06:26:13.116336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.035 [2024-12-15 06:26:13.126769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.035 [2024-12-15 06:26:13.126790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.035 [2024-12-15 06:26:13.126799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.035 [2024-12-15 06:26:13.137160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.035 [2024-12-15 06:26:13.137179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.035 [2024-12-15 06:26:13.137187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.035 [2024-12-15 06:26:13.145828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.035 [2024-12-15 06:26:13.145849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.035 [2024-12-15 06:26:13.145857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.350 [2024-12-15 06:26:13.157396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.350 [2024-12-15 06:26:13.157419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.350 [2024-12-15 06:26:13.157427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.350 [2024-12-15 06:26:13.168177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.350 [2024-12-15 06:26:13.168198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.350 [2024-12-15 06:26:13.168206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.350 [2024-12-15 06:26:13.177168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.350 [2024-12-15 06:26:13.177188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.350 [2024-12-15 06:26:13.177201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.350 [2024-12-15 06:26:13.186978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.350 [2024-12-15 06:26:13.187004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.350 [2024-12-15 06:26:13.187013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.350 [2024-12-15 06:26:13.196112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.350 [2024-12-15 06:26:13.196131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.350 [2024-12-15 06:26:13.196139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.350 [2024-12-15 06:26:13.206926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.350 [2024-12-15 06:26:13.206946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.350 [2024-12-15 06:26:13.206954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.350 [2024-12-15 06:26:13.215404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.350 [2024-12-15 06:26:13.215424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.350 [2024-12-15 06:26:13.215431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.350 [2024-12-15 06:26:13.227182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.350 [2024-12-15 06:26:13.227202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.350 [2024-12-15 06:26:13.227210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.350 [2024-12-15 06:26:13.238306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.350 [2024-12-15 06:26:13.238325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.350 [2024-12-15 06:26:13.238333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.350 [2024-12-15 06:26:13.247715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.350 [2024-12-15 06:26:13.247735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.350 [2024-12-15 06:26:13.247742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.350 [2024-12-15 06:26:13.259688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.350 [2024-12-15 06:26:13.259709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.350 [2024-12-15 06:26:13.259717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.350 [2024-12-15 06:26:13.272455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.350 [2024-12-15 06:26:13.272479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.350 [2024-12-15 06:26:13.272487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.350 [2024-12-15 06:26:13.283504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.350 [2024-12-15 06:26:13.283523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.350 [2024-12-15 06:26:13.283531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.350 [2024-12-15 06:26:13.293027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.350 [2024-12-15 06:26:13.293046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.350 [2024-12-15 06:26:13.293054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:53.350 [2024-12-15 06:26:13.305078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0)
00:35:53.350 [2024-12-15 06:26:13.305099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.350 [2024-12-15 06:26:13.305107] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.350 [2024-12-15 06:26:13.313422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:53.350 [2024-12-15 06:26:13.313442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.350 [2024-12-15 06:26:13.313450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.350 [2024-12-15 06:26:13.325831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:53.350 [2024-12-15 06:26:13.325853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.350 [2024-12-15 06:26:13.325860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.350 [2024-12-15 06:26:13.337945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:53.350 [2024-12-15 06:26:13.337966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.350 [2024-12-15 06:26:13.337973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.350 [2024-12-15 06:26:13.346183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:53.350 [2024-12-15 06:26:13.346203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:53.351 [2024-12-15 06:26:13.346211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.351 [2024-12-15 06:26:13.357627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:53.351 [2024-12-15 06:26:13.357646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.351 [2024-12-15 06:26:13.357654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.351 [2024-12-15 06:26:13.369490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:53.351 [2024-12-15 06:26:13.369509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.351 [2024-12-15 06:26:13.369517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.351 [2024-12-15 06:26:13.377465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:53.351 [2024-12-15 06:26:13.377486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.351 [2024-12-15 06:26:13.377494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.351 [2024-12-15 06:26:13.388181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:53.351 [2024-12-15 06:26:13.388201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 
nsid:1 lba:7895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.351 [2024-12-15 06:26:13.388209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.351 [2024-12-15 06:26:13.396484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:53.351 [2024-12-15 06:26:13.396503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.351 [2024-12-15 06:26:13.396511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.351 [2024-12-15 06:26:13.408357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:53.351 [2024-12-15 06:26:13.408377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.351 [2024-12-15 06:26:13.408384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.351 [2024-12-15 06:26:13.420196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:53.351 [2024-12-15 06:26:13.420217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.351 [2024-12-15 06:26:13.420225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.351 [2024-12-15 06:26:13.431193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:53.351 [2024-12-15 06:26:13.431213] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.351 [2024-12-15 06:26:13.431221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.351 [2024-12-15 06:26:13.443921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:53.351 [2024-12-15 06:26:13.443941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.351 [2024-12-15 06:26:13.443948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.351 [2024-12-15 06:26:13.452158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:53.351 [2024-12-15 06:26:13.452179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.351 [2024-12-15 06:26:13.452191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.351 [2024-12-15 06:26:13.463080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:53.351 [2024-12-15 06:26:13.463100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.351 [2024-12-15 06:26:13.463108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.351 [2024-12-15 06:26:13.472244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x9396e0) 00:35:53.351 [2024-12-15 06:26:13.472264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.351 [2024-12-15 06:26:13.472271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.664 [2024-12-15 06:26:13.480385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:53.664 [2024-12-15 06:26:13.480407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.664 [2024-12-15 06:26:13.480415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.664 [2024-12-15 06:26:13.490497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:53.664 [2024-12-15 06:26:13.490517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.664 [2024-12-15 06:26:13.490526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.664 25584.50 IOPS, 99.94 MiB/s [2024-12-15T05:26:13.804Z] [2024-12-15 06:26:13.499915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9396e0) 00:35:53.664 [2024-12-15 06:26:13.499936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.664 [2024-12-15 06:26:13.499944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:53.664 
00:35:53.664 Latency(us)
00:35:53.664 [2024-12-15T05:26:13.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:53.664 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:35:53.664 nvme0n1 : 2.00 25608.68 100.03 0.00 0.00 4992.65 2309.36 18100.42
00:35:53.664 [2024-12-15T05:26:13.804Z] ===================================================================================================================
00:35:53.664 [2024-12-15T05:26:13.804Z] Total : 25608.68 100.03 0.00 0.00 4992.65 2309.36 18100.42
00:35:53.664 {
00:35:53.664 "results": [
00:35:53.664 {
00:35:53.664 "job": "nvme0n1",
00:35:53.664 "core_mask": "0x2",
00:35:53.664 "workload": "randread",
00:35:53.664 "status": "finished",
00:35:53.664 "queue_depth": 128,
00:35:53.664 "io_size": 4096,
00:35:53.664 "runtime": 2.003696,
00:35:53.664 "iops": 25608.675168289003,
00:35:53.664 "mibps": 100.03388737612892,
00:35:53.664 "io_failed": 0,
00:35:53.664 "io_timeout": 0,
00:35:53.664 "avg_latency_us": 4992.654188011345,
00:35:53.664 "min_latency_us": 2309.3638095238093,
00:35:53.664 "max_latency_us": 18100.41904761905
00:35:53.664 }
00:35:53.664 ],
00:35:53.664 "core_count": 1
00:35:53.664 }
00:35:53.664 06:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:53.664 06:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:53.664 06:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:53.664 | .driver_specific
00:35:53.664 | .nvme_error
00:35:53.664 | .status_code
00:35:53.664 | .command_transient_transport_error'
00:35:53.664 06:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:53.664 06:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 201 > 0 ))
00:35:53.664 06:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1196109
00:35:53.664 06:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1196109 ']'
00:35:53.665 06:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1196109
00:35:53.665 06:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:35:53.665 06:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:53.665 06:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1196109
00:35:53.665 06:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:35:53.665 06:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:35:53.665 06:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1196109'
00:35:53.665 killing process with pid 1196109
00:35:53.665 06:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1196109
00:35:53.665 Received shutdown signal, test time was about 2.000000 seconds
00:35:53.665
00:35:53.665 Latency(us)
00:35:53.665 [2024-12-15T05:26:13.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:53.665 [2024-12-15T05:26:13.805Z] ===================================================================================================================
00:35:53.665 [2024-12-15T05:26:13.805Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:53.665 06:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1196109
00:35:53.924 06:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:35:53.924 06:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:35:53.924 06:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:35:53.924 06:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:35:53.924 06:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:35:53.924 06:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1196578
00:35:53.924 06:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1196578 /var/tmp/bperf.sock
00:35:53.924 06:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:35:53.924 06:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1196578 ']'
00:35:53.924 06:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:53.924 06:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:53.924 06:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:35:53.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:35:53.924 06:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:53.924 06:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:53.925 [2024-12-15 06:26:13.979299] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:35:53.925 [2024-12-15 06:26:13.979351] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196578 ]
00:35:53.925 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:53.925 Zero copy mechanism will not be used.
00:35:53.925 [2024-12-15 06:26:14.055706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:54.184 [2024-12-15 06:26:14.077828] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:35:54.184 06:26:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:54.184 06:26:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:35:54.184 06:26:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:54.184 06:26:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:54.443 06:26:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:54.443 06:26:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:54.443 06:26:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:54.443 06:26:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:54.443 06:26:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:54.443 06:26:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:54.703 nvme0n1
00:35:54.703 06:26:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:35:54.703 06:26:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:54.703 06:26:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:54.703 06:26:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:54.703 06:26:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:35:54.703 06:26:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:54.963 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:54.963 Zero copy mechanism will not be used.
00:35:54.963 Running I/O for 2 seconds...
00:35:54.963 [2024-12-15 06:26:14.910324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:54.963 [2024-12-15 06:26:14.910362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.963 [2024-12-15 06:26:14.910373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:54.963 [2024-12-15 06:26:14.916124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:54.963 [2024-12-15 06:26:14.916165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.963 [2024-12-15 06:26:14.916175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:54.963 [2024-12-15 06:26:14.922059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:54.963 [2024-12-15 06:26:14.922083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.963 [2024-12-15 06:26:14.922091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:54.963 [2024-12-15 06:26:14.927470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:54.963 [2024-12-15 06:26:14.927493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.963 [2024-12-15 06:26:14.927502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:54.963 [2024-12-15 06:26:14.930345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:54.964 [2024-12-15 06:26:14.930366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.964 [2024-12-15 06:26:14.930374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:54.964 [2024-12-15 06:26:14.935732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:54.964 [2024-12-15 06:26:14.935754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.964 [2024-12-15 06:26:14.935762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:54.964 [2024-12-15 06:26:14.941107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:54.964 [2024-12-15 06:26:14.941129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.964 [2024-12-15 06:26:14.941137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:54.964 [2024-12-15 06:26:14.946437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:54.964 [2024-12-15 06:26:14.946458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.964 [2024-12-15 06:26:14.946466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:54.964 [2024-12-15 06:26:14.951724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:54.964 [2024-12-15 06:26:14.951749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.964 [2024-12-15 06:26:14.951757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:54.964 [2024-12-15 06:26:14.956960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:54.964 [2024-12-15 06:26:14.956980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.964 [2024-12-15 06:26:14.956988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:54.964 [2024-12-15 06:26:14.962117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:54.964 [2024-12-15 06:26:14.962137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.964 [2024-12-15 06:26:14.962148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:54.964 [2024-12-15 06:26:14.967330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:54.964 [2024-12-15 06:26:14.967351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.964 [2024-12-15 06:26:14.967359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:54.964 [2024-12-15 06:26:14.972550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:54.964 [2024-12-15 06:26:14.972572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.964 [2024-12-15 06:26:14.972580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:54.964 [2024-12-15 06:26:14.977869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:54.964 [2024-12-15 06:26:14.977890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.964 [2024-12-15 06:26:14.977899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:54.964 [2024-12-15 06:26:14.983238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:54.964 [2024-12-15 06:26:14.983261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.964 [2024-12-15 06:26:14.983269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:54.964 [2024-12-15 06:26:14.988627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:54.964 [2024-12-15 06:26:14.988649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.964 [2024-12-15 06:26:14.988658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:54.964 [2024-12-15 06:26:14.993869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:54.964 [2024-12-15 06:26:14.993891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.964 [2024-12-15 06:26:14.993899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:54.964 [2024-12-15 06:26:14.999270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:54.964 [2024-12-15 06:26:14.999291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.964 [2024-12-15 06:26:14.999300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:54.964 [2024-12-15 06:26:15.004667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:54.964 [2024-12-15 06:26:15.004688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.964 [2024-12-15 06:26:15.004696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:54.964 [2024-12-15 06:26:15.010023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:54.964 [2024-12-15 06:26:15.010048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.964 [2024-12-15 06:26:15.010056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:54.964 [2024-12-15 06:26:15.015229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:54.964 [2024-12-15 06:26:15.015250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.964 [2024-12-15 06:26:15.015259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:54.964 [2024-12-15 06:26:15.020481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:54.964 [2024-12-15 06:26:15.020502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.964 [2024-12-15 06:26:15.020511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:54.964 [2024-12-15 06:26:15.025721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:54.964 [2024-12-15 06:26:15.025741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.964 [2024-12-15 06:26:15.025749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:54.964 [2024-12-15 06:26:15.030915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:54.964 [2024-12-15 06:26:15.030935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.964 [2024-12-15 06:26:15.030943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:54.964 [2024-12-15 06:26:15.036348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:54.964 [2024-12-15 06:26:15.036369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.964 [2024-12-15 06:26:15.036377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:54.964 [2024-12-15 06:26:15.041962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:54.964 [2024-12-15 06:26:15.041983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.964 [2024-12-15 06:26:15.041997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:54.964 [2024-12-15 06:26:15.047427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:54.964 [2024-12-15 06:26:15.047448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.964 [2024-12-15 06:26:15.047456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:54.964 [2024-12-15 06:26:15.052871]
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:54.964 [2024-12-15 06:26:15.052893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.964 [2024-12-15 06:26:15.052901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:54.964 [2024-12-15 06:26:15.058219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:54.964 [2024-12-15 06:26:15.058241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.964 [2024-12-15 06:26:15.058249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:54.964 [2024-12-15 06:26:15.063536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:54.964 [2024-12-15 06:26:15.063557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.964 [2024-12-15 06:26:15.063564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:54.964 [2024-12-15 06:26:15.068871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:54.964 [2024-12-15 06:26:15.068891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.964 [2024-12-15 06:26:15.068899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:35:54.964 [2024-12-15 06:26:15.074198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:54.964 [2024-12-15 06:26:15.074219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.965 [2024-12-15 06:26:15.074226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:54.965 [2024-12-15 06:26:15.079585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:54.965 [2024-12-15 06:26:15.079605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.965 [2024-12-15 06:26:15.079613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:54.965 [2024-12-15 06:26:15.085110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:54.965 [2024-12-15 06:26:15.085131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.965 [2024-12-15 06:26:15.085139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:54.965 [2024-12-15 06:26:15.090474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:54.965 [2024-12-15 06:26:15.090495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.965 [2024-12-15 06:26:15.090503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:54.965 [2024-12-15 06:26:15.095879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:54.965 [2024-12-15 06:26:15.095901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.965 [2024-12-15 06:26:15.095908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.101381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.101403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.101418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.107037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.107059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.107067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.112490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.112510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.112517] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.117905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.117925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.117933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.123348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.123368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.123376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.128695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.128714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.128723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.134068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.134090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:55.225 [2024-12-15 06:26:15.134098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.139393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.139414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.139422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.144928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.144948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.144956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.150386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.150410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.150419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.156039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.156060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.156068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.161730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.161751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.161759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.166807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.166828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.166836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.172141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.172162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.172170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.177555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.177575] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.177584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.182934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.182954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.182962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.188317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.188338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.188346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.193778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.193799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.193807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.199218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 
00:35:55.225 [2024-12-15 06:26:15.199238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.199245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.204798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.204819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.204827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.210201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.210222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.210229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.215648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.215669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.215676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.221291] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.221310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.221318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.226690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.226711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.226719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.232254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.232274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.232282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.237580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.237601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.237609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.242926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.242946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.242958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.248187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.248208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.248216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.253557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.253582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.253590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.259026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.259047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.259054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.264231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.264252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.264260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.269669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.269691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.269699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.275243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.275264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.275272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.280746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.280767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.225 [2024-12-15 06:26:15.280775] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:55.225 [2024-12-15 06:26:15.286055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.225 [2024-12-15 06:26:15.286075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.226 [2024-12-15 06:26:15.286083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:55.226 [2024-12-15 06:26:15.291338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.226 [2024-12-15 06:26:15.291359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.226 [2024-12-15 06:26:15.291366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:55.226 [2024-12-15 06:26:15.296570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.226 [2024-12-15 06:26:15.296590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.226 [2024-12-15 06:26:15.296598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:55.226 [2024-12-15 06:26:15.302329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.226 [2024-12-15 06:26:15.302349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:55.226 [2024-12-15 06:26:15.302357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:55.226 [2024-12-15 06:26:15.308182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.226 [2024-12-15 06:26:15.308203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.226 [2024-12-15 06:26:15.308211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:55.226 [2024-12-15 06:26:15.314475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.226 [2024-12-15 06:26:15.314496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.226 [2024-12-15 06:26:15.314504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:55.226 [2024-12-15 06:26:15.321845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.226 [2024-12-15 06:26:15.321867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.226 [2024-12-15 06:26:15.321875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:55.226 [2024-12-15 06:26:15.329921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.226 [2024-12-15 06:26:15.329943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:8 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.226 [2024-12-15 06:26:15.329952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:55.226 [2024-12-15 06:26:15.337648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.226 [2024-12-15 06:26:15.337670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.226 [2024-12-15 06:26:15.337678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:55.226 [2024-12-15 06:26:15.345416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.226 [2024-12-15 06:26:15.345438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.226 [2024-12-15 06:26:15.345449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:55.226 [2024-12-15 06:26:15.352768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.226 [2024-12-15 06:26:15.352790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.226 [2024-12-15 06:26:15.352798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:55.226 [2024-12-15 06:26:15.360967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.226 [2024-12-15 06:26:15.360988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.226 [2024-12-15 06:26:15.361004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:55.486 [2024-12-15 06:26:15.368010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.486 [2024-12-15 06:26:15.368033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.486 [2024-12-15 06:26:15.368052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:55.486 [2024-12-15 06:26:15.375641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.486 [2024-12-15 06:26:15.375663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.486 [2024-12-15 06:26:15.375671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:55.486 [2024-12-15 06:26:15.383188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.486 [2024-12-15 06:26:15.383210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.486 [2024-12-15 06:26:15.383219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:55.486 [2024-12-15 06:26:15.391226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.486 [2024-12-15 06:26:15.391248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.486 [2024-12-15 06:26:15.391257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:55.486 [2024-12-15 06:26:15.398748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.486 [2024-12-15 06:26:15.398769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.486 [2024-12-15 06:26:15.398777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:55.486 [2024-12-15 06:26:15.406597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.486 [2024-12-15 06:26:15.406619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.486 [2024-12-15 06:26:15.406627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:55.486 [2024-12-15 06:26:15.414093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.486 [2024-12-15 06:26:15.414119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.486 [2024-12-15 06:26:15.414127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:55.486 [2024-12-15 06:26:15.421825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.486 [2024-12-15 06:26:15.421852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.486 [2024-12-15 06:26:15.421860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:55.486 [2024-12-15 06:26:15.428874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.486 [2024-12-15 06:26:15.428896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.486 [2024-12-15 06:26:15.428904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:55.486 [2024-12-15 06:26:15.434502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.486 [2024-12-15 06:26:15.434524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.486 [2024-12-15 06:26:15.434532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:55.486 [2024-12-15 06:26:15.439818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.486 [2024-12-15 06:26:15.439839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.486 [2024-12-15 06:26:15.439847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:55.486 [2024-12-15 06:26:15.445236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.486 [2024-12-15 06:26:15.445257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.486 [2024-12-15 06:26:15.445265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:55.486 [2024-12-15 06:26:15.450719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.486 [2024-12-15 06:26:15.450740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.486 [2024-12-15 06:26:15.450748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:55.486 [2024-12-15 06:26:15.456141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.486 [2024-12-15 06:26:15.456162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.486 [2024-12-15 06:26:15.456170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:55.486 [2024-12-15 06:26:15.461411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.486 [2024-12-15 06:26:15.461431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.486 [2024-12-15 06:26:15.461439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:55.486 [2024-12-15 06:26:15.466812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.486 [2024-12-15 06:26:15.466833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.486 [2024-12-15 06:26:15.466841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:55.486 [2024-12-15 06:26:15.472167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.486 [2024-12-15 06:26:15.472187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.486 [2024-12-15 06:26:15.472196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:55.486 [2024-12-15 06:26:15.477592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.486 [2024-12-15 06:26:15.477613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.486 [2024-12-15 06:26:15.477621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:55.486 [2024-12-15 06:26:15.482917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.486 [2024-12-15 06:26:15.482938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.486 [2024-12-15 06:26:15.482946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:55.486 [2024-12-15 06:26:15.488111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.487 [2024-12-15 06:26:15.488132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.487 [2024-12-15 06:26:15.488140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:55.487 [2024-12-15 06:26:15.493202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.487 [2024-12-15 06:26:15.493223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.487 [2024-12-15 06:26:15.493231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:55.487 [2024-12-15 06:26:15.498471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.487 [2024-12-15 06:26:15.498492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.487 [2024-12-15 06:26:15.498499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:55.487 [2024-12-15 06:26:15.503543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.487 [2024-12-15 06:26:15.503563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.487 [2024-12-15 06:26:15.503571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:55.487 [2024-12-15 06:26:15.508699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.487 [2024-12-15 06:26:15.508721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.487 [2024-12-15 06:26:15.508731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:55.487 [2024-12-15 06:26:15.514001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.487 [2024-12-15 06:26:15.514021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.487 [2024-12-15 06:26:15.514029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:55.487 [2024-12-15 06:26:15.519111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.487 [2024-12-15 06:26:15.519131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.487 [2024-12-15 06:26:15.519139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:55.487 [2024-12-15 06:26:15.524482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.487 [2024-12-15 06:26:15.524504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.487 [2024-12-15 06:26:15.524512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:55.487 [2024-12-15 06:26:15.530031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.487 [2024-12-15 06:26:15.530052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.487 [2024-12-15 06:26:15.530059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:55.487 [2024-12-15 06:26:15.535463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.487 [2024-12-15 06:26:15.535484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.487 [2024-12-15 06:26:15.535492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:55.487 [2024-12-15 06:26:15.540677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.487 [2024-12-15 06:26:15.540697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.487 [2024-12-15 06:26:15.540705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:55.487 [2024-12-15 06:26:15.546259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.487 [2024-12-15 06:26:15.546280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.487 [2024-12-15 06:26:15.546288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:55.487 [2024-12-15 06:26:15.551448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.487 [2024-12-15 06:26:15.551468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.487 [2024-12-15 06:26:15.551476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:55.487 [2024-12-15 06:26:15.556767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.487 [2024-12-15 06:26:15.556792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.487 [2024-12-15 06:26:15.556800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:55.487 [2024-12-15 06:26:15.562184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.487 [2024-12-15 06:26:15.562205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.487 [2024-12-15 06:26:15.562213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:55.487 [2024-12-15 06:26:15.567356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.487 [2024-12-15 06:26:15.567376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.487 [2024-12-15 06:26:15.567384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:55.487 [2024-12-15 06:26:15.572688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.487 [2024-12-15 06:26:15.572707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.487 [2024-12-15 06:26:15.572715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:55.487 [2024-12-15 06:26:15.577932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.487 [2024-12-15 06:26:15.577953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.487 [2024-12-15 06:26:15.577960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:55.487 [2024-12-15 06:26:15.583237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.487 [2024-12-15 06:26:15.583257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.487 [2024-12-15 06:26:15.583265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:55.487 [2024-12-15 06:26:15.588585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.487 [2024-12-15 06:26:15.588605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.487 [2024-12-15 06:26:15.588613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:55.487 [2024-12-15 06:26:15.593813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.487 [2024-12-15 06:26:15.593834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.487 [2024-12-15 06:26:15.593842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:55.487 [2024-12-15 06:26:15.599022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.487 [2024-12-15 06:26:15.599044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.487 [2024-12-15 06:26:15.599052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:55.487 [2024-12-15 06:26:15.604151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.487 [2024-12-15 06:26:15.604172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.487 [2024-12-15 06:26:15.604179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:55.487 [2024-12-15 06:26:15.609417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.487 [2024-12-15 06:26:15.609439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.487 [2024-12-15 06:26:15.609447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:55.487 [2024-12-15 06:26:15.614776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.487 [2024-12-15 06:26:15.614797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.487 [2024-12-15 06:26:15.614805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:55.487 [2024-12-15 06:26:15.620144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.487 [2024-12-15 06:26:15.620166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.487 [2024-12-15 06:26:15.620173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:55.749 [2024-12-15 06:26:15.625549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.749 [2024-12-15 06:26:15.625572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.749 [2024-12-15 06:26:15.625580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:55.749 [2024-12-15 06:26:15.630858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.749 [2024-12-15 06:26:15.630880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.749 [2024-12-15 06:26:15.630888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:55.749 [2024-12-15 06:26:15.636193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.749 [2024-12-15 06:26:15.636215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.749 [2024-12-15 06:26:15.636224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:55.749 [2024-12-15 06:26:15.641477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.749 [2024-12-15 06:26:15.641497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.749 [2024-12-15 06:26:15.641505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:55.749 [2024-12-15 06:26:15.646860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.749 [2024-12-15 06:26:15.646882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.749 [2024-12-15 06:26:15.646893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:55.749 [2024-12-15 06:26:15.652047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.749 [2024-12-15 06:26:15.652068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.749 [2024-12-15 06:26:15.652076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:55.749 [2024-12-15 06:26:15.657186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.749 [2024-12-15 06:26:15.657207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.749 [2024-12-15 06:26:15.657215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:55.749 [2024-12-15 06:26:15.661955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.749 [2024-12-15 06:26:15.661976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.749 [2024-12-15 06:26:15.661985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:55.749 [2024-12-15 06:26:15.667433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.749 [2024-12-15 06:26:15.667454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.749 [2024-12-15 06:26:15.667462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:55.749 [2024-12-15 06:26:15.672334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.749 [2024-12-15 06:26:15.672355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.749 [2024-12-15 06:26:15.672364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:55.749 [2024-12-15 06:26:15.676030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.749 [2024-12-15 06:26:15.676051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.749 [2024-12-15 06:26:15.676059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:55.749 [2024-12-15 06:26:15.683463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.749 [2024-12-15 06:26:15.683485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.749 [2024-12-15 06:26:15.683493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:55.749 [2024-12-15 06:26:15.689936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.749 [2024-12-15 06:26:15.689957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.749 [2024-12-15 06:26:15.689965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:55.749 [2024-12-15 06:26:15.695872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.749 [2024-12-15 06:26:15.695897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.749 [2024-12-15 06:26:15.695906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:55.749 [2024-12-15 06:26:15.701456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.749 [2024-12-15 06:26:15.701477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.749 [2024-12-15 06:26:15.701486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:55.749 [2024-12-15 06:26:15.707221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.750 [2024-12-15 06:26:15.707243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.750 [2024-12-15 06:26:15.707251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:55.750 [2024-12-15 06:26:15.712886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.750 [2024-12-15 06:26:15.712906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.750 [2024-12-15 06:26:15.712913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:55.750 [2024-12-15 06:26:15.718319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.750 [2024-12-15 06:26:15.718344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.750 [2024-12-15 06:26:15.718352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:55.750 [2024-12-15 06:26:15.723758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.750 [2024-12-15 06:26:15.723779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.750 [2024-12-15 06:26:15.723787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:55.750 [2024-12-15 06:26:15.729248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.750 [2024-12-15 06:26:15.729270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.750 [2024-12-15 06:26:15.729278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:55.750 [2024-12-15 06:26:15.734594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.750 [2024-12-15 06:26:15.734616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.750 [2024-12-15 06:26:15.734624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:55.750 [2024-12-15 06:26:15.740068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.750 [2024-12-15 06:26:15.740089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.750 [2024-12-15 06:26:15.740097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:55.750 [2024-12-15 06:26:15.745095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.750 [2024-12-15 06:26:15.745118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.750 [2024-12-15 06:26:15.745126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:55.750 [2024-12-15 06:26:15.750519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.750 [2024-12-15 06:26:15.750540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.750 [2024-12-15 06:26:15.750548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:55.750 [2024-12-15 06:26:15.755782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.750 [2024-12-15 06:26:15.755803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.750 [2024-12-15 06:26:15.755811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:55.750 [2024-12-15 06:26:15.760891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.750 [2024-12-15 06:26:15.760915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.750 [2024-12-15 06:26:15.760934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:55.750 [2024-12-15 06:26:15.766183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.750 [2024-12-15 06:26:15.766205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.750 [2024-12-15 06:26:15.766213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:55.750 [2024-12-15 06:26:15.771437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.750 [2024-12-15 06:26:15.771460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.750 [2024-12-15 06:26:15.771468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:55.750 [2024-12-15 06:26:15.776829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.750 [2024-12-15 06:26:15.776852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.750 [2024-12-15 06:26:15.776860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:55.750 [2024-12-15 06:26:15.781748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.750 [2024-12-15 06:26:15.781770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.750 [2024-12-15 06:26:15.781778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:55.750 [2024-12-15 06:26:15.785024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.750 [2024-12-15 06:26:15.785046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.750 [2024-12-15 06:26:15.785058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:55.750 [2024-12-15 06:26:15.789475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:55.750 [2024-12-15 06:26:15.789498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:55.750 [2024-12-15 06:26:15.789505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:55.750 [2024-12-15 06:26:15.794660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.750 [2024-12-15 06:26:15.794682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.750 [2024-12-15 06:26:15.794690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:55.750 [2024-12-15 06:26:15.799851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.750 [2024-12-15 06:26:15.799872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.750 [2024-12-15 06:26:15.799880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:55.750 [2024-12-15 06:26:15.805264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.750 [2024-12-15 06:26:15.805286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.750 [2024-12-15 06:26:15.805294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:55.750 [2024-12-15 06:26:15.810698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.750 [2024-12-15 06:26:15.810720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.750 [2024-12-15 06:26:15.810729] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:55.750 [2024-12-15 06:26:15.815975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.750 [2024-12-15 06:26:15.816003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.750 [2024-12-15 06:26:15.816012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:55.750 [2024-12-15 06:26:15.821172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.750 [2024-12-15 06:26:15.821194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.750 [2024-12-15 06:26:15.821202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:55.750 [2024-12-15 06:26:15.826401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.750 [2024-12-15 06:26:15.826429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.750 [2024-12-15 06:26:15.826437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:55.750 [2024-12-15 06:26:15.831358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.750 [2024-12-15 06:26:15.831381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:55.750 [2024-12-15 06:26:15.831389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:55.750 [2024-12-15 06:26:15.836310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.750 [2024-12-15 06:26:15.836333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.750 [2024-12-15 06:26:15.836341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:55.750 [2024-12-15 06:26:15.841386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.750 [2024-12-15 06:26:15.841407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.750 [2024-12-15 06:26:15.841415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:55.750 [2024-12-15 06:26:15.846364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.750 [2024-12-15 06:26:15.846386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.750 [2024-12-15 06:26:15.846394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:55.751 [2024-12-15 06:26:15.851363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.751 [2024-12-15 06:26:15.851385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.751 [2024-12-15 06:26:15.851393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:55.751 [2024-12-15 06:26:15.857095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.751 [2024-12-15 06:26:15.857119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.751 [2024-12-15 06:26:15.857129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:55.751 [2024-12-15 06:26:15.862402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.751 [2024-12-15 06:26:15.862425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.751 [2024-12-15 06:26:15.862433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:55.751 [2024-12-15 06:26:15.867681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.751 [2024-12-15 06:26:15.867703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.751 [2024-12-15 06:26:15.867712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:55.751 [2024-12-15 06:26:15.872953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.751 [2024-12-15 06:26:15.872975] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.751 [2024-12-15 06:26:15.872987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:55.751 [2024-12-15 06:26:15.878213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.751 [2024-12-15 06:26:15.878235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.751 [2024-12-15 06:26:15.878243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:55.751 [2024-12-15 06:26:15.883430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:55.751 [2024-12-15 06:26:15.883452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.751 [2024-12-15 06:26:15.883460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:56.012 [2024-12-15 06:26:15.888672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.012 [2024-12-15 06:26:15.888694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.012 [2024-12-15 06:26:15.888704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:56.012 [2024-12-15 06:26:15.893918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21ce130) 00:35:56.012 [2024-12-15 06:26:15.893940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.012 [2024-12-15 06:26:15.893949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:56.012 [2024-12-15 06:26:15.899100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.012 [2024-12-15 06:26:15.899122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.012 [2024-12-15 06:26:15.899130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:56.012 [2024-12-15 06:26:15.904241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.012 [2024-12-15 06:26:15.904263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.012 [2024-12-15 06:26:15.904271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:56.012 5608.00 IOPS, 701.00 MiB/s [2024-12-15T05:26:16.152Z] [2024-12-15 06:26:15.910390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.012 [2024-12-15 06:26:15.910412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.012 [2024-12-15 06:26:15.910420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:56.012 
[2024-12-15 06:26:15.915454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.012 [2024-12-15 06:26:15.915476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.012 [2024-12-15 06:26:15.915484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:56.012 [2024-12-15 06:26:15.920580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.012 [2024-12-15 06:26:15.920606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.012 [2024-12-15 06:26:15.920614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:56.012 [2024-12-15 06:26:15.925816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.012 [2024-12-15 06:26:15.925838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.012 [2024-12-15 06:26:15.925847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:56.012 [2024-12-15 06:26:15.931142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.012 [2024-12-15 06:26:15.931165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.012 [2024-12-15 06:26:15.931173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:56.012 [2024-12-15 06:26:15.936650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.012 [2024-12-15 06:26:15.936672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.012 [2024-12-15 06:26:15.936680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:56.012 [2024-12-15 06:26:15.942005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.012 [2024-12-15 06:26:15.942027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.012 [2024-12-15 06:26:15.942036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:56.012 [2024-12-15 06:26:15.947375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.012 [2024-12-15 06:26:15.947398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.012 [2024-12-15 06:26:15.947406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:56.012 [2024-12-15 06:26:15.952759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.012 [2024-12-15 06:26:15.952780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.012 [2024-12-15 06:26:15.952788] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:56.012 [2024-12-15 06:26:15.958164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.012 [2024-12-15 06:26:15.958187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.012 [2024-12-15 06:26:15.958195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:56.012 [2024-12-15 06:26:15.963640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.012 [2024-12-15 06:26:15.963662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.012 [2024-12-15 06:26:15.963670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:56.012 [2024-12-15 06:26:15.969063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.012 [2024-12-15 06:26:15.969085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.012 [2024-12-15 06:26:15.969093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:56.012 [2024-12-15 06:26:15.974598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.012 [2024-12-15 06:26:15.974620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:56.013 [2024-12-15 06:26:15.974629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:56.013 [2024-12-15 06:26:15.980017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.013 [2024-12-15 06:26:15.980038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.013 [2024-12-15 06:26:15.980046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:56.013 [2024-12-15 06:26:15.985314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.013 [2024-12-15 06:26:15.985335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.013 [2024-12-15 06:26:15.985343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:56.013 [2024-12-15 06:26:15.990693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.013 [2024-12-15 06:26:15.990714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.013 [2024-12-15 06:26:15.990722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:56.013 [2024-12-15 06:26:15.996017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.013 [2024-12-15 06:26:15.996038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.013 [2024-12-15 06:26:15.996046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:56.013 [2024-12-15 06:26:16.001542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.013 [2024-12-15 06:26:16.001564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.013 [2024-12-15 06:26:16.001573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:56.013 [2024-12-15 06:26:16.006909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.013 [2024-12-15 06:26:16.006932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.013 [2024-12-15 06:26:16.006941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:56.013 [2024-12-15 06:26:16.012220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.013 [2024-12-15 06:26:16.012243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.013 [2024-12-15 06:26:16.012255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:56.013 [2024-12-15 06:26:16.017801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.013 [2024-12-15 06:26:16.017824] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.013 [2024-12-15 06:26:16.017833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:56.013 [2024-12-15 06:26:16.022976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.013 [2024-12-15 06:26:16.023004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.013 [2024-12-15 06:26:16.023013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:56.013 [2024-12-15 06:26:16.028117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.013 [2024-12-15 06:26:16.028139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.013 [2024-12-15 06:26:16.028148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:56.013 [2024-12-15 06:26:16.033387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.013 [2024-12-15 06:26:16.033410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.013 [2024-12-15 06:26:16.033418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:56.013 [2024-12-15 06:26:16.038572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 
00:35:56.013 [2024-12-15 06:26:16.038594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.013 [2024-12-15 06:26:16.038602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:56.013 [2024-12-15 06:26:16.043687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.013 [2024-12-15 06:26:16.043709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.013 [2024-12-15 06:26:16.043717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:56.013 [2024-12-15 06:26:16.048858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.013 [2024-12-15 06:26:16.048880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.013 [2024-12-15 06:26:16.048888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:56.013 [2024-12-15 06:26:16.053989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.013 [2024-12-15 06:26:16.054017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.013 [2024-12-15 06:26:16.054025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:56.013 [2024-12-15 06:26:16.059112] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.013 [2024-12-15 06:26:16.059134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.013 [2024-12-15 06:26:16.059142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:56.013 [2024-12-15 06:26:16.064285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.013 [2024-12-15 06:26:16.064307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.013 [2024-12-15 06:26:16.064315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:56.013 [2024-12-15 06:26:16.069591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.013 [2024-12-15 06:26:16.069613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.013 [2024-12-15 06:26:16.069621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:56.013 [2024-12-15 06:26:16.075062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.013 [2024-12-15 06:26:16.075085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.013 [2024-12-15 06:26:16.075093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:56.013 [2024-12-15 06:26:16.080587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.013 [2024-12-15 06:26:16.080609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.013 [2024-12-15 06:26:16.080618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:56.013 [2024-12-15 06:26:16.085969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.013 [2024-12-15 06:26:16.085998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.013 [2024-12-15 06:26:16.086007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:56.013 [2024-12-15 06:26:16.091187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.013 [2024-12-15 06:26:16.091210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.013 [2024-12-15 06:26:16.091218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:56.013 [2024-12-15 06:26:16.096802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.013 [2024-12-15 06:26:16.096824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.013 [2024-12-15 06:26:16.096833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:56.013 [2024-12-15 06:26:16.102957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.013 [2024-12-15 06:26:16.102979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.013 [2024-12-15 06:26:16.102990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:56.013 [2024-12-15 06:26:16.108290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.013 [2024-12-15 06:26:16.108312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.013 [2024-12-15 06:26:16.108321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:56.013 [2024-12-15 06:26:16.113666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.013 [2024-12-15 06:26:16.113687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.013 [2024-12-15 06:26:16.113695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:56.013 [2024-12-15 06:26:16.119249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.013 [2024-12-15 06:26:16.119270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.013 [2024-12-15 06:26:16.119278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:56.013 [2024-12-15 06:26:16.124508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.013 [2024-12-15 06:26:16.124531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.014 [2024-12-15 06:26:16.124539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:56.014 [2024-12-15 06:26:16.129686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.014 [2024-12-15 06:26:16.129708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.014 [2024-12-15 06:26:16.129716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:56.014 [2024-12-15 06:26:16.135055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.014 [2024-12-15 06:26:16.135076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.014 [2024-12-15 06:26:16.135084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:56.014 [2024-12-15 06:26:16.140594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.014 [2024-12-15 06:26:16.140616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.014 [2024-12-15 06:26:16.140624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:56.014 [2024-12-15 06:26:16.145950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.014 [2024-12-15 06:26:16.145972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.014 [2024-12-15 06:26:16.145980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:56.275 [2024-12-15 06:26:16.151307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.275 [2024-12-15 06:26:16.151333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.275 [2024-12-15 06:26:16.151341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:56.275 [2024-12-15 06:26:16.156599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.275 [2024-12-15 06:26:16.156621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.275 [2024-12-15 06:26:16.156629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:56.275 [2024-12-15 06:26:16.161837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.275 [2024-12-15 06:26:16.161859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.275 [2024-12-15 06:26:16.161867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:56.275 [2024-12-15 06:26:16.167116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.275 [2024-12-15 06:26:16.167139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.275 [2024-12-15 06:26:16.167147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:56.275 [2024-12-15 06:26:16.172373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.275 [2024-12-15 06:26:16.172394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.275 [2024-12-15 06:26:16.172402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:56.275 [2024-12-15 06:26:16.177619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.275 [2024-12-15 06:26:16.177641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.275 [2024-12-15 06:26:16.177649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:56.275 [2024-12-15 06:26:16.182822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.275 [2024-12-15 06:26:16.182843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.275 [2024-12-15 06:26:16.182851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:56.275 [2024-12-15 06:26:16.188254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.275 [2024-12-15 06:26:16.188276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.275 [2024-12-15 06:26:16.188284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:56.275 [2024-12-15 06:26:16.193649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.275 [2024-12-15 06:26:16.193671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.275 [2024-12-15 06:26:16.193679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:56.275 [2024-12-15 06:26:16.198897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.275 [2024-12-15 06:26:16.198918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.275 [2024-12-15 06:26:16.198926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:56.275 [2024-12-15 06:26:16.204314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.275 [2024-12-15 06:26:16.204335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.275 [2024-12-15 06:26:16.204343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:56.275 [2024-12-15 06:26:16.209591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.275 [2024-12-15 06:26:16.209612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.275 [2024-12-15 06:26:16.209620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:56.275 [2024-12-15 06:26:16.215039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.275 [2024-12-15 06:26:16.215060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.275 [2024-12-15 06:26:16.215068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:56.275 [2024-12-15 06:26:16.220278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.275 [2024-12-15 06:26:16.220299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.275 [2024-12-15 06:26:16.220307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:56.275 [2024-12-15 06:26:16.225536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.275 [2024-12-15 06:26:16.225559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.275 [2024-12-15 06:26:16.225567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:56.275 [2024-12-15 06:26:16.230736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.275 [2024-12-15 06:26:16.230758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.275 [2024-12-15 06:26:16.230766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:56.275 [2024-12-15 06:26:16.235951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.275 [2024-12-15 06:26:16.235972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.275 [2024-12-15 06:26:16.235980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:56.275 [2024-12-15 06:26:16.241195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.275 [2024-12-15 06:26:16.241217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.275 [2024-12-15 06:26:16.241229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:56.275 [2024-12-15 06:26:16.246497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.275 [2024-12-15 06:26:16.246518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.275 [2024-12-15 06:26:16.246526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:56.275 [2024-12-15 06:26:16.251978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.276 [2024-12-15 06:26:16.252007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.276 [2024-12-15 06:26:16.252015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:56.276 [2024-12-15 06:26:16.257321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.276 [2024-12-15 06:26:16.257343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.276 [2024-12-15 06:26:16.257351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:56.276 [2024-12-15 06:26:16.262795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.276 [2024-12-15 06:26:16.262817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.276 [2024-12-15 06:26:16.262825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:56.276 [2024-12-15 06:26:16.268139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.276 [2024-12-15 06:26:16.268169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.276 [2024-12-15 06:26:16.268177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:56.276 [2024-12-15 06:26:16.273351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.276 [2024-12-15 06:26:16.273373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.276 [2024-12-15 06:26:16.273381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:56.276 [2024-12-15 06:26:16.278592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.276 [2024-12-15 06:26:16.278614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.276 [2024-12-15 06:26:16.278622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:56.276 [2024-12-15 06:26:16.283962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.276 [2024-12-15 06:26:16.283984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.276 [2024-12-15 06:26:16.284008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:56.276 [2024-12-15 06:26:16.289300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.276 [2024-12-15 06:26:16.289325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.276 [2024-12-15 06:26:16.289333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:56.276 [2024-12-15 06:26:16.294661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.276 [2024-12-15 06:26:16.294681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.276 [2024-12-15 06:26:16.294689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:56.276 [2024-12-15 06:26:16.299975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.276 [2024-12-15 06:26:16.300002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.276 [2024-12-15 06:26:16.300010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:56.276 [2024-12-15 06:26:16.305463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.276 [2024-12-15 06:26:16.305486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.276 [2024-12-15 06:26:16.305494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:56.276 [2024-12-15 06:26:16.311009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.276 [2024-12-15 06:26:16.311030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.276 [2024-12-15 06:26:16.311039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:56.276 [2024-12-15 06:26:16.316359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.276 [2024-12-15 06:26:16.316380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.276 [2024-12-15 06:26:16.316388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:56.276 [2024-12-15 06:26:16.321695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.276 [2024-12-15 06:26:16.321716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.276 [2024-12-15 06:26:16.321724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:56.276 [2024-12-15 06:26:16.326888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.276 [2024-12-15 06:26:16.326910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.276 [2024-12-15 06:26:16.326917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:56.276 [2024-12-15 06:26:16.332211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.276 [2024-12-15 06:26:16.332232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.276 [2024-12-15 06:26:16.332241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:56.276 [2024-12-15 06:26:16.337472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.276 [2024-12-15 06:26:16.337493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.276 [2024-12-15 06:26:16.337502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:56.276 [2024-12-15 06:26:16.342870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.276 [2024-12-15 06:26:16.342891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.276 [2024-12-15 06:26:16.342899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:56.276 [2024-12-15 06:26:16.348229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.276 [2024-12-15 06:26:16.348251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.276 [2024-12-15 06:26:16.348259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:56.276 [2024-12-15 06:26:16.353626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.276 [2024-12-15 06:26:16.353648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.276 [2024-12-15 06:26:16.353656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:56.276 [2024-12-15 06:26:16.358914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.276 [2024-12-15 06:26:16.358936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.276 [2024-12-15 06:26:16.358944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:56.276 [2024-12-15 06:26:16.364501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.276 [2024-12-15 06:26:16.364523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.276 [2024-12-15 06:26:16.364532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:56.276 [2024-12-15 06:26:16.369887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.276 [2024-12-15 06:26:16.369909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.276 [2024-12-15 06:26:16.369917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:56.276 [2024-12-15 06:26:16.375147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.276 [2024-12-15 06:26:16.375169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.276 [2024-12-15 06:26:16.375177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:56.276 [2024-12-15 06:26:16.380295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.276 [2024-12-15 06:26:16.380320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.276 [2024-12-15 06:26:16.380329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:56.276 [2024-12-15 06:26:16.385421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.276 [2024-12-15 06:26:16.385442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.276 [2024-12-15 06:26:16.385451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:56.276 [2024-12-15 06:26:16.391242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.276 [2024-12-15 06:26:16.391264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.276 [2024-12-15 06:26:16.391272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:56.276 [2024-12-15 06:26:16.398075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.276 [2024-12-15 06:26:16.398099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.276 [2024-12-15 06:26:16.398107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:56.277 [2024-12-15 06:26:16.405942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.277 [2024-12-15 06:26:16.405965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.277 [2024-12-15 06:26:16.405974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:56.537 [2024-12-15 06:26:16.413546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.537 [2024-12-15 06:26:16.413571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.537 [2024-12-15 06:26:16.413580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:56.537 [2024-12-15 06:26:16.421109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.537 [2024-12-15 06:26:16.421132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.537 [2024-12-15 06:26:16.421140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:56.537 [2024-12-15 06:26:16.429029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.537 [2024-12-15 06:26:16.429055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.537 [2024-12-15 06:26:16.429066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:56.537 [2024-12-15 06:26:16.436918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.537 [2024-12-15 06:26:16.436942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.537 [2024-12-15 06:26:16.436951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:56.537 [2024-12-15 06:26:16.444672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.537 [2024-12-15 06:26:16.444694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.537 [2024-12-15 06:26:16.444703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:56.537 [2024-12-15 06:26:16.452126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.537 [2024-12-15 06:26:16.452149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.537 [2024-12-15 06:26:16.452158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:56.537 [2024-12-15 06:26:16.459935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.537 [2024-12-15 06:26:16.459959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.537 [2024-12-15 06:26:16.459968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:56.537 [2024-12-15 06:26:16.467848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.537 [2024-12-15 06:26:16.467870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.538 [2024-12-15 06:26:16.467879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:56.538 [2024-12-15 06:26:16.475984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.538 [2024-12-15 06:26:16.476013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.538 [2024-12-15 06:26:16.476022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:56.538 [2024-12-15 06:26:16.483768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.538 [2024-12-15 06:26:16.483791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.538 [2024-12-15 06:26:16.483799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:56.538 [2024-12-15 06:26:16.491148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.538 [2024-12-15 06:26:16.491171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.538 [2024-12-15 06:26:16.491180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:56.538 [2024-12-15 06:26:16.498940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.538 [2024-12-15 06:26:16.498963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.538 [2024-12-15 06:26:16.498971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:56.538 [2024-12-15 06:26:16.506154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.538 [2024-12-15 06:26:16.506176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.538 [2024-12-15 06:26:16.506188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:56.538 [2024-12-15 06:26:16.511958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.538 [2024-12-15 06:26:16.511979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.538 [2024-12-15 06:26:16.511988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:56.538 [2024-12-15 06:26:16.517887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.538 [2024-12-15 06:26:16.517909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.538 [2024-12-15 06:26:16.517917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:56.538 [2024-12-15 06:26:16.523881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.538 [2024-12-15 06:26:16.523902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:56.538 [2024-12-15 06:26:16.523912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:56.538 [2024-12-15 06:26:16.529592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130)
00:35:56.538 [2024-12-15 06:26:16.529615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ
sqid:1 cid:11 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.538 [2024-12-15 06:26:16.529624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:56.538 [2024-12-15 06:26:16.535798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.538 [2024-12-15 06:26:16.535821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.538 [2024-12-15 06:26:16.535830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:56.538 [2024-12-15 06:26:16.542110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.538 [2024-12-15 06:26:16.542133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.538 [2024-12-15 06:26:16.542142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:56.538 [2024-12-15 06:26:16.549573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.538 [2024-12-15 06:26:16.549596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.538 [2024-12-15 06:26:16.549604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:56.538 [2024-12-15 06:26:16.556945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.538 [2024-12-15 
06:26:16.556967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.538 [2024-12-15 06:26:16.556976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:56.538 [2024-12-15 06:26:16.564364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.538 [2024-12-15 06:26:16.564393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.538 [2024-12-15 06:26:16.564402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:56.538 [2024-12-15 06:26:16.571914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.538 [2024-12-15 06:26:16.571937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.538 [2024-12-15 06:26:16.571946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:56.538 [2024-12-15 06:26:16.579572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.538 [2024-12-15 06:26:16.579595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.538 [2024-12-15 06:26:16.579603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:56.538 [2024-12-15 06:26:16.586122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x21ce130) 00:35:56.538 [2024-12-15 06:26:16.586144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.538 [2024-12-15 06:26:16.586152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:56.538 [2024-12-15 06:26:16.591697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.538 [2024-12-15 06:26:16.591719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.538 [2024-12-15 06:26:16.591727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:56.538 [2024-12-15 06:26:16.597054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.538 [2024-12-15 06:26:16.597076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.538 [2024-12-15 06:26:16.597085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:56.538 [2024-12-15 06:26:16.600429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.538 [2024-12-15 06:26:16.600449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.538 [2024-12-15 06:26:16.600458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:56.538 [2024-12-15 06:26:16.606092] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.538 [2024-12-15 06:26:16.606115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.538 [2024-12-15 06:26:16.606123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:56.538 [2024-12-15 06:26:16.611675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.538 [2024-12-15 06:26:16.611697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.538 [2024-12-15 06:26:16.611705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:56.538 [2024-12-15 06:26:16.616608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.538 [2024-12-15 06:26:16.616630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.538 [2024-12-15 06:26:16.616638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:56.538 [2024-12-15 06:26:16.621925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.538 [2024-12-15 06:26:16.621946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.538 [2024-12-15 06:26:16.621955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:35:56.538 [2024-12-15 06:26:16.627156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.538 [2024-12-15 06:26:16.627178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.538 [2024-12-15 06:26:16.627186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:56.538 [2024-12-15 06:26:16.633138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.538 [2024-12-15 06:26:16.633161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.538 [2024-12-15 06:26:16.633170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:56.538 [2024-12-15 06:26:16.639870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.538 [2024-12-15 06:26:16.639892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.538 [2024-12-15 06:26:16.639901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:56.538 [2024-12-15 06:26:16.646926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.539 [2024-12-15 06:26:16.646948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.539 [2024-12-15 06:26:16.646956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:56.539 [2024-12-15 06:26:16.655038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.539 [2024-12-15 06:26:16.655060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.539 [2024-12-15 06:26:16.655069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:56.539 [2024-12-15 06:26:16.662191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.539 [2024-12-15 06:26:16.662215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.539 [2024-12-15 06:26:16.662223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:56.539 [2024-12-15 06:26:16.668877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.539 [2024-12-15 06:26:16.668899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.539 [2024-12-15 06:26:16.668911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:56.799 [2024-12-15 06:26:16.675790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.799 [2024-12-15 06:26:16.675813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.799 [2024-12-15 06:26:16.675822] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:56.799 [2024-12-15 06:26:16.683543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.799 [2024-12-15 06:26:16.683565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.799 [2024-12-15 06:26:16.683573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:56.799 [2024-12-15 06:26:16.690154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.799 [2024-12-15 06:26:16.690177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.799 [2024-12-15 06:26:16.690186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:56.799 [2024-12-15 06:26:16.696607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.799 [2024-12-15 06:26:16.696630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.799 [2024-12-15 06:26:16.696638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:56.799 [2024-12-15 06:26:16.702336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.800 [2024-12-15 06:26:16.702359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:56.800 [2024-12-15 06:26:16.702367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:56.800 [2024-12-15 06:26:16.707614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.800 [2024-12-15 06:26:16.707635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.800 [2024-12-15 06:26:16.707643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:56.800 [2024-12-15 06:26:16.710593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.800 [2024-12-15 06:26:16.710614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.800 [2024-12-15 06:26:16.710622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:56.800 [2024-12-15 06:26:16.715922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.800 [2024-12-15 06:26:16.715943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.800 [2024-12-15 06:26:16.715951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:56.800 [2024-12-15 06:26:16.721593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.800 [2024-12-15 06:26:16.721614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.800 [2024-12-15 06:26:16.721622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:56.800 [2024-12-15 06:26:16.726839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.800 [2024-12-15 06:26:16.726860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.800 [2024-12-15 06:26:16.726868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:56.800 [2024-12-15 06:26:16.732113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.800 [2024-12-15 06:26:16.732133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.800 [2024-12-15 06:26:16.732142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:56.800 [2024-12-15 06:26:16.737334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.800 [2024-12-15 06:26:16.737354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.800 [2024-12-15 06:26:16.737362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:56.800 [2024-12-15 06:26:16.742590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.800 [2024-12-15 06:26:16.742610] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.800 [2024-12-15 06:26:16.742619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:56.800 [2024-12-15 06:26:16.747765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.800 [2024-12-15 06:26:16.747785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.800 [2024-12-15 06:26:16.747793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:56.800 [2024-12-15 06:26:16.753052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.800 [2024-12-15 06:26:16.753073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.800 [2024-12-15 06:26:16.753081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:56.800 [2024-12-15 06:26:16.758381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.800 [2024-12-15 06:26:16.758401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.800 [2024-12-15 06:26:16.758409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:56.800 [2024-12-15 06:26:16.763638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21ce130) 00:35:56.800 [2024-12-15 06:26:16.763659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.800 [2024-12-15 06:26:16.763670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:56.800 [2024-12-15 06:26:16.768862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.800 [2024-12-15 06:26:16.768883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.800 [2024-12-15 06:26:16.768891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:56.800 [2024-12-15 06:26:16.774232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.800 [2024-12-15 06:26:16.774252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.800 [2024-12-15 06:26:16.774261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:56.800 [2024-12-15 06:26:16.779546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.800 [2024-12-15 06:26:16.779567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.800 [2024-12-15 06:26:16.779575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:56.800 [2024-12-15 06:26:16.784749] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.800 [2024-12-15 06:26:16.784769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.800 [2024-12-15 06:26:16.784777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:56.800 [2024-12-15 06:26:16.790166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.800 [2024-12-15 06:26:16.790186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.800 [2024-12-15 06:26:16.790194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:56.800 [2024-12-15 06:26:16.795369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.800 [2024-12-15 06:26:16.795390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.800 [2024-12-15 06:26:16.795398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:56.800 [2024-12-15 06:26:16.800634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.800 [2024-12-15 06:26:16.800654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.800 [2024-12-15 06:26:16.800662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:35:56.800 [2024-12-15 06:26:16.805938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.800 [2024-12-15 06:26:16.805959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.800 [2024-12-15 06:26:16.805967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:56.800 [2024-12-15 06:26:16.811292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.800 [2024-12-15 06:26:16.811316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.800 [2024-12-15 06:26:16.811324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:56.800 [2024-12-15 06:26:16.816627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.800 [2024-12-15 06:26:16.816648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.800 [2024-12-15 06:26:16.816656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:56.800 [2024-12-15 06:26:16.821513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.800 [2024-12-15 06:26:16.821535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.800 [2024-12-15 06:26:16.821543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:56.800 [2024-12-15 06:26:16.826808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.800 [2024-12-15 06:26:16.826830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.800 [2024-12-15 06:26:16.826838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:56.800 [2024-12-15 06:26:16.832166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.800 [2024-12-15 06:26:16.832188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.800 [2024-12-15 06:26:16.832197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:56.800 [2024-12-15 06:26:16.837766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.800 [2024-12-15 06:26:16.837789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.800 [2024-12-15 06:26:16.837797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:56.800 [2024-12-15 06:26:16.843152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.800 [2024-12-15 06:26:16.843174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.801 [2024-12-15 
06:26:16.843183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:56.801 [2024-12-15 06:26:16.848578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.801 [2024-12-15 06:26:16.848600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.801 [2024-12-15 06:26:16.848608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:56.801 [2024-12-15 06:26:16.854134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.801 [2024-12-15 06:26:16.854155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.801 [2024-12-15 06:26:16.854163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:56.801 [2024-12-15 06:26:16.859528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.801 [2024-12-15 06:26:16.859551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.801 [2024-12-15 06:26:16.859559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:56.801 [2024-12-15 06:26:16.865787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.801 [2024-12-15 06:26:16.865808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.801 [2024-12-15 06:26:16.865817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:56.801 [2024-12-15 06:26:16.872236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.801 [2024-12-15 06:26:16.872258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.801 [2024-12-15 06:26:16.872267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:56.801 [2024-12-15 06:26:16.877746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.801 [2024-12-15 06:26:16.877768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.801 [2024-12-15 06:26:16.877776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:56.801 [2024-12-15 06:26:16.882960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.801 [2024-12-15 06:26:16.882982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.801 [2024-12-15 06:26:16.882990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:56.801 [2024-12-15 06:26:16.888170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.801 [2024-12-15 06:26:16.888191] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.801 [2024-12-15 06:26:16.888200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:56.801 [2024-12-15 06:26:16.893374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.801 [2024-12-15 06:26:16.893395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.801 [2024-12-15 06:26:16.893403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:56.801 [2024-12-15 06:26:16.898546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.801 [2024-12-15 06:26:16.898568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.801 [2024-12-15 06:26:16.898576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:56.801 [2024-12-15 06:26:16.903741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ce130) 00:35:56.801 [2024-12-15 06:26:16.903762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.801 [2024-12-15 06:26:16.903774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:56.801 [2024-12-15 06:26:16.908907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21ce130) 00:35:56.801 [2024-12-15 06:26:16.908929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:56.801 [2024-12-15 06:26:16.908937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:56.801 5525.00 IOPS, 690.62 MiB/s 00:35:56.801 Latency(us) 00:35:56.801 [2024-12-15T05:26:16.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:56.801 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:56.801 nvme0n1 : 2.00 5524.34 690.54 0.00 0.00 2893.67 659.26 8301.23 00:35:56.801 [2024-12-15T05:26:16.941Z] =================================================================================================================== 00:35:56.801 [2024-12-15T05:26:16.941Z] Total : 5524.34 690.54 0.00 0.00 2893.67 659.26 8301.23 00:35:56.801 { 00:35:56.801 "results": [ 00:35:56.801 { 00:35:56.801 "job": "nvme0n1", 00:35:56.801 "core_mask": "0x2", 00:35:56.801 "workload": "randread", 00:35:56.801 "status": "finished", 00:35:56.801 "queue_depth": 16, 00:35:56.801 "io_size": 131072, 00:35:56.801 "runtime": 2.003137, 00:35:56.801 "iops": 5524.335080426351, 00:35:56.801 "mibps": 690.5418850532939, 00:35:56.801 "io_failed": 0, 00:35:56.801 "io_timeout": 0, 00:35:56.801 "avg_latency_us": 2893.6736691539077, 00:35:56.801 "min_latency_us": 659.2609523809524, 00:35:56.801 "max_latency_us": 8301.226666666667 00:35:56.801 } 00:35:56.801 ], 00:35:56.801 "core_count": 1 00:35:56.801 } 00:35:57.061 06:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:57.061 06:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:57.061 06:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r 
'.bdevs[0] 00:35:57.061 | .driver_specific 00:35:57.061 | .nvme_error 00:35:57.061 | .status_code 00:35:57.061 | .command_transient_transport_error' 00:35:57.061 06:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:57.061 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 357 > 0 )) 00:35:57.061 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1196578 00:35:57.061 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1196578 ']' 00:35:57.061 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1196578 00:35:57.061 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:57.061 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:57.061 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1196578 00:35:57.061 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:57.061 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:57.061 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1196578' 00:35:57.061 killing process with pid 1196578 00:35:57.061 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1196578 00:35:57.061 Received shutdown signal, test time was about 2.000000 seconds 00:35:57.061 00:35:57.061 Latency(us) 00:35:57.061 [2024-12-15T05:26:17.201Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:35:57.061 [2024-12-15T05:26:17.201Z] =================================================================================================================== 00:35:57.061 [2024-12-15T05:26:17.201Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:57.061 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1196578 00:35:57.320 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:35:57.320 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:57.320 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:57.320 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:57.320 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:57.320 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1197178 00:35:57.320 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1197178 /var/tmp/bperf.sock 00:35:57.320 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:35:57.320 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1197178 ']' 00:35:57.320 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:57.320 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:57.320 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:57.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:57.320 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:57.320 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:57.320 [2024-12-15 06:26:17.381851] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:57.320 [2024-12-15 06:26:17.381897] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197178 ] 00:35:57.320 [2024-12-15 06:26:17.455401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:57.579 [2024-12-15 06:26:17.477705] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:57.580 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:57.580 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:57.580 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:57.580 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:57.839 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:57.839 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.839 06:26:17 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:57.839 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.839 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:57.839 06:26:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:58.098 nvme0n1 00:35:58.098 06:26:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:58.098 06:26:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.098 06:26:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:58.098 06:26:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.098 06:26:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:58.098 06:26:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:58.098 Running I/O for 2 seconds... 
00:35:58.098 [2024-12-15 06:26:18.234178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:58.098 [2024-12-15 06:26:18.234333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.098 [2024-12-15 06:26:18.234362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.358 [2024-12-15 06:26:18.243540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:58.359 [2024-12-15 06:26:18.243686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.359 [2024-12-15 06:26:18.243710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.359 [2024-12-15 06:26:18.253101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:58.359 [2024-12-15 06:26:18.253246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.359 [2024-12-15 06:26:18.253268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.359 [2024-12-15 06:26:18.262517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:58.359 [2024-12-15 06:26:18.262658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.359 [2024-12-15 06:26:18.262679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.359 [2024-12-15 06:26:18.271929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:58.359 [2024-12-15 06:26:18.272077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.359 [2024-12-15 06:26:18.272098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.359 [2024-12-15 06:26:18.281259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:58.359 [2024-12-15 06:26:18.281398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.359 [2024-12-15 06:26:18.281420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.359 [2024-12-15 06:26:18.290548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:58.359 [2024-12-15 06:26:18.290688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.359 [2024-12-15 06:26:18.290710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.359 [2024-12-15 06:26:18.299875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:58.359 [2024-12-15 06:26:18.300020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.359 [2024-12-15 06:26:18.300041] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.359 [2024-12-15 06:26:18.309176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:58.359 [2024-12-15 06:26:18.309317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.359 [2024-12-15 06:26:18.309337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.359 [2024-12-15 06:26:18.318516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:58.359 [2024-12-15 06:26:18.318659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.359 [2024-12-15 06:26:18.318678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.359 [2024-12-15 06:26:18.327796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:58.359 [2024-12-15 06:26:18.327936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.359 [2024-12-15 06:26:18.327958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.359 [2024-12-15 06:26:18.337079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:58.359 [2024-12-15 06:26:18.337218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.359 [2024-12-15 06:26:18.337239] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.359 [2024-12-15 06:26:18.346358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:58.359 [2024-12-15 06:26:18.346498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.359 [2024-12-15 06:26:18.346519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.359 [2024-12-15 06:26:18.355621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:58.359 [2024-12-15 06:26:18.355762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.359 [2024-12-15 06:26:18.355782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.359 [2024-12-15 06:26:18.364883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:58.359 [2024-12-15 06:26:18.365030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.359 [2024-12-15 06:26:18.365050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.359 [2024-12-15 06:26:18.374166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:58.359 [2024-12-15 06:26:18.374311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11504 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:35:58.359 [2024-12-15 06:26:18.374330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.359 [2024-12-15 06:26:18.383432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:58.359 [2024-12-15 06:26:18.383572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.359 [2024-12-15 06:26:18.383592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.359 [2024-12-15 06:26:18.392696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:58.359 [2024-12-15 06:26:18.392838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.359 [2024-12-15 06:26:18.392858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.359 [2024-12-15 06:26:18.401950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:58.359 [2024-12-15 06:26:18.402099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.359 [2024-12-15 06:26:18.402120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.359 [2024-12-15 06:26:18.411219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:58.359 [2024-12-15 06:26:18.411360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 
nsid:1 lba:6167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.359 [2024-12-15 06:26:18.411380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.359 [2024-12-15 06:26:18.420464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:58.359 [2024-12-15 06:26:18.420605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.359 [2024-12-15 06:26:18.420624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.359 [2024-12-15 06:26:18.429729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:58.359 [2024-12-15 06:26:18.429869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.359 [2024-12-15 06:26:18.429889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.359 [2024-12-15 06:26:18.439011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:58.359 [2024-12-15 06:26:18.439153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.359 [2024-12-15 06:26:18.439173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.359 [2024-12-15 06:26:18.448278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:58.359 [2024-12-15 06:26:18.448418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.359 [2024-12-15 06:26:18.448437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.359 [2024-12-15 06:26:18.457552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:58.359 [2024-12-15 06:26:18.457691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.359 [2024-12-15 06:26:18.457711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.359 [2024-12-15 06:26:18.466819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:58.359 [2024-12-15 06:26:18.466959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.359 [2024-12-15 06:26:18.466979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.359 [2024-12-15 06:26:18.476077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:58.359 [2024-12-15 06:26:18.476218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.359 [2024-12-15 06:26:18.476238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.359 [2024-12-15 06:26:18.485415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 
00:35:58.359 [2024-12-15 06:26:18.485557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.359 [2024-12-15 06:26:18.485575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.359 [2024-12-15 06:26:18.494942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:58.360 [2024-12-15 06:26:18.495098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.360 [2024-12-15 06:26:18.495118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.620 [2024-12-15 06:26:18.504439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:58.620 [2024-12-15 06:26:18.504583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.620 [2024-12-15 06:26:18.504603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.620 [2024-12-15 06:26:18.513706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:58.620 [2024-12-15 06:26:18.513846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.620 [2024-12-15 06:26:18.513866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:58.620 [2024-12-15 06:26:18.523248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.620 [2024-12-15 06:26:18.523392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.620 [2024-12-15 06:26:18.523412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.620 [2024-12-15 06:26:18.532642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.620 [2024-12-15 06:26:18.532782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.620 [2024-12-15 06:26:18.532808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.620 [2024-12-15 06:26:18.541935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.620 [2024-12-15 06:26:18.542086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.620 [2024-12-15 06:26:18.542106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.620 [2024-12-15 06:26:18.551204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.620 [2024-12-15 06:26:18.551344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.620 [2024-12-15 06:26:18.551364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.620 [2024-12-15 06:26:18.560463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.620 [2024-12-15 06:26:18.560603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.620 [2024-12-15 06:26:18.560622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.620 [2024-12-15 06:26:18.569759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.620 [2024-12-15 06:26:18.569897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.620 [2024-12-15 06:26:18.569916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.620 [2024-12-15 06:26:18.579003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.620 [2024-12-15 06:26:18.579144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.620 [2024-12-15 06:26:18.579164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.620 [2024-12-15 06:26:18.588358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.620 [2024-12-15 06:26:18.588499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.621 [2024-12-15 06:26:18.588519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.621 [2024-12-15 06:26:18.597625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.621 [2024-12-15 06:26:18.597767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.621 [2024-12-15 06:26:18.597786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.621 [2024-12-15 06:26:18.606902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.621 [2024-12-15 06:26:18.607055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.621 [2024-12-15 06:26:18.607075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.621 [2024-12-15 06:26:18.616181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.621 [2024-12-15 06:26:18.616325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.621 [2024-12-15 06:26:18.616346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.621 [2024-12-15 06:26:18.625464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.621 [2024-12-15 06:26:18.625607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.621 [2024-12-15 06:26:18.625627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.621 [2024-12-15 06:26:18.634759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.621 [2024-12-15 06:26:18.634898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.621 [2024-12-15 06:26:18.634919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.621 [2024-12-15 06:26:18.644036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.621 [2024-12-15 06:26:18.644177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.621 [2024-12-15 06:26:18.644197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.621 [2024-12-15 06:26:18.653302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.621 [2024-12-15 06:26:18.653442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.621 [2024-12-15 06:26:18.653463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.621 [2024-12-15 06:26:18.662588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.621 [2024-12-15 06:26:18.662729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.621 [2024-12-15 06:26:18.662750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.621 [2024-12-15 06:26:18.671872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.621 [2024-12-15 06:26:18.672016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.621 [2024-12-15 06:26:18.672035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.621 [2024-12-15 06:26:18.681155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.621 [2024-12-15 06:26:18.681296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.621 [2024-12-15 06:26:18.681315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.621 [2024-12-15 06:26:18.690428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.621 [2024-12-15 06:26:18.690570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.621 [2024-12-15 06:26:18.690590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.621 [2024-12-15 06:26:18.699706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.621 [2024-12-15 06:26:18.699847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.621 [2024-12-15 06:26:18.699866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.621 [2024-12-15 06:26:18.708973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.621 [2024-12-15 06:26:18.709123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.621 [2024-12-15 06:26:18.709142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.621 [2024-12-15 06:26:18.718244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.621 [2024-12-15 06:26:18.718384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.621 [2024-12-15 06:26:18.718403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.621 [2024-12-15 06:26:18.727504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.621 [2024-12-15 06:26:18.727648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.621 [2024-12-15 06:26:18.727668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.621 [2024-12-15 06:26:18.736783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.621 [2024-12-15 06:26:18.736923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.621 [2024-12-15 06:26:18.736941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.621 [2024-12-15 06:26:18.746239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.621 [2024-12-15 06:26:18.746378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.621 [2024-12-15 06:26:18.746398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.621 [2024-12-15 06:26:18.755631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.621 [2024-12-15 06:26:18.755776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.621 [2024-12-15 06:26:18.755795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.881 [2024-12-15 06:26:18.765138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.881 [2024-12-15 06:26:18.765281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.881 [2024-12-15 06:26:18.765301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.881 [2024-12-15 06:26:18.774387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.881 [2024-12-15 06:26:18.774530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.881 [2024-12-15 06:26:18.774553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.881 [2024-12-15 06:26:18.783659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.881 [2024-12-15 06:26:18.783802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.881 [2024-12-15 06:26:18.783822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.881 [2024-12-15 06:26:18.792919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.881 [2024-12-15 06:26:18.793069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.881 [2024-12-15 06:26:18.793089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.881 [2024-12-15 06:26:18.802195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.881 [2024-12-15 06:26:18.802333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.881 [2024-12-15 06:26:18.802352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.881 [2024-12-15 06:26:18.811468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.881 [2024-12-15 06:26:18.811608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.881 [2024-12-15 06:26:18.811628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.881 [2024-12-15 06:26:18.820735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.881 [2024-12-15 06:26:18.820877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.881 [2024-12-15 06:26:18.820897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.881 [2024-12-15 06:26:18.830026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.881 [2024-12-15 06:26:18.830167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.881 [2024-12-15 06:26:18.830187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.881 [2024-12-15 06:26:18.839367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.882 [2024-12-15 06:26:18.839509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.882 [2024-12-15 06:26:18.839529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.882 [2024-12-15 06:26:18.848652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.882 [2024-12-15 06:26:18.848794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.882 [2024-12-15 06:26:18.848813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.882 [2024-12-15 06:26:18.857935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.882 [2024-12-15 06:26:18.858089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.882 [2024-12-15 06:26:18.858109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.882 [2024-12-15 06:26:18.867213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.882 [2024-12-15 06:26:18.867353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.882 [2024-12-15 06:26:18.867372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.882 [2024-12-15 06:26:18.876483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.882 [2024-12-15 06:26:18.876624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:14007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.882 [2024-12-15 06:26:18.876643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.882 [2024-12-15 06:26:18.885752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.882 [2024-12-15 06:26:18.885895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.882 [2024-12-15 06:26:18.885914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.882 [2024-12-15 06:26:18.895043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.882 [2024-12-15 06:26:18.895185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.882 [2024-12-15 06:26:18.895204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.882 [2024-12-15 06:26:18.904363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.882 [2024-12-15 06:26:18.904501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.882 [2024-12-15 06:26:18.904521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.882 [2024-12-15 06:26:18.913637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.882 [2024-12-15 06:26:18.913774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.882 [2024-12-15 06:26:18.913794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.882 [2024-12-15 06:26:18.922882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.882 [2024-12-15 06:26:18.923032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.882 [2024-12-15 06:26:18.923051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.882 [2024-12-15 06:26:18.932180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.882 [2024-12-15 06:26:18.932322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.882 [2024-12-15 06:26:18.932342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.882 [2024-12-15 06:26:18.941437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.882 [2024-12-15 06:26:18.941578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.882 [2024-12-15 06:26:18.941598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.882 [2024-12-15 06:26:18.950722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.882 [2024-12-15 06:26:18.950874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.882 [2024-12-15 06:26:18.950894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.882 [2024-12-15 06:26:18.959986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.882 [2024-12-15 06:26:18.960133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.882 [2024-12-15 06:26:18.960152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.882 [2024-12-15 06:26:18.969327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.882 [2024-12-15 06:26:18.969467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.882 [2024-12-15 06:26:18.969486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.882 [2024-12-15 06:26:18.978585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.882 [2024-12-15 06:26:18.978725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.882 [2024-12-15 06:26:18.978745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.882 [2024-12-15 06:26:18.987876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.882 [2024-12-15 06:26:18.988020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.882 [2024-12-15 06:26:18.988037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.882 [2024-12-15 06:26:18.997327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.882 [2024-12-15 06:26:18.997467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.882 [2024-12-15 06:26:18.997486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.882 [2024-12-15 06:26:19.006619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.882 [2024-12-15 06:26:19.006758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.882 [2024-12-15 06:26:19.006778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:58.882 [2024-12-15 06:26:19.015899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:58.882 [2024-12-15 06:26:19.016045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.882 [2024-12-15 06:26:19.016066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:59.143 [2024-12-15 06:26:19.025268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:59.143 [2024-12-15 06:26:19.025409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:59.143 [2024-12-15 06:26:19.025429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:59.143 [2024-12-15 06:26:19.034573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:59.143 [2024-12-15 06:26:19.034712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:59.143 [2024-12-15 06:26:19.034732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:59.143 [2024-12-15 06:26:19.043837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:59.143 [2024-12-15 06:26:19.043977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:59.143 [2024-12-15 06:26:19.044000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:59.143 [2024-12-15 06:26:19.053108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:59.143 [2024-12-15 06:26:19.053247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:59.143 [2024-12-15 06:26:19.053267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:59.143 [2024-12-15 06:26:19.062383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:59.143 [2024-12-15 06:26:19.062522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:59.143 [2024-12-15 06:26:19.062542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:59.143 [2024-12-15 06:26:19.071637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:59.143 [2024-12-15 06:26:19.071779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:59.143 [2024-12-15 06:26:19.071799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:59.143 [2024-12-15 06:26:19.080912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:59.143 [2024-12-15 06:26:19.081060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:59.143 [2024-12-15 06:26:19.081079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:59.143 [2024-12-15 06:26:19.090204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:59.143 [2024-12-15 06:26:19.090343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:59.143 [2024-12-15 06:26:19.090362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:59.143 [2024-12-15 06:26:19.099474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:59.143 [2024-12-15 06:26:19.099619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:59.143 [2024-12-15 06:26:19.099639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:59.143 [2024-12-15 06:26:19.108737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:59.143 [2024-12-15 06:26:19.108878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:59.143 [2024-12-15 06:26:19.108898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:59.143 [2024-12-15 06:26:19.118002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:59.143 [2024-12-15 06:26:19.118144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:59.143 [2024-12-15 06:26:19.118164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:59.143 [2024-12-15 06:26:19.127287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:59.143 [2024-12-15 06:26:19.127426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:59.143 [2024-12-15 06:26:19.127446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:59.143 [2024-12-15 06:26:19.136557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:59.143 [2024-12-15 06:26:19.136696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:59.143 [2024-12-15 06:26:19.136716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:59.143 [2024-12-15 06:26:19.145819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:59.143 [2024-12-15 06:26:19.145959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:59.143 [2024-12-15 06:26:19.145977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:59.143 [2024-12-15 06:26:19.155102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:59.143 [2024-12-15 06:26:19.155241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:59.143 [2024-12-15 06:26:19.155260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:59.143 [2024-12-15 06:26:19.164362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:59.143 [2024-12-15 06:26:19.164510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:59.143 [2024-12-15 06:26:19.164529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:59.143 [2024-12-15 06:26:19.173624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:59.143 [2024-12-15 06:26:19.173764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:59.143 [2024-12-15 06:26:19.173784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:59.143 [2024-12-15 06:26:19.182893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:59.143 [2024-12-15 06:26:19.183039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:59.143 [2024-12-15 06:26:19.183059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:59.143 [2024-12-15 06:26:19.192176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:59.143 [2024-12-15 06:26:19.192319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:59.143 [2024-12-15 06:26:19.192339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:59.143 [2024-12-15 06:26:19.201443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:59.143 [2024-12-15 06:26:19.201583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:59.143 [2024-12-15 06:26:19.201603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:59.144 [2024-12-15 06:26:19.210701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:59.144 [2024-12-15 06:26:19.210840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:59.144 [2024-12-15 06:26:19.210860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:59.144 [2024-12-15 06:26:19.219985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:59.144 [2024-12-15 06:26:19.220133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:59.144 [2024-12-15 06:26:19.220153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:59.144 27299.00 IOPS, 106.64 MiB/s [2024-12-15T05:26:19.284Z]
00:35:59.144 [2024-12-15 06:26:19.229262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:59.144 [2024-12-15 06:26:19.229402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:59.144 [2024-12-15 06:26:19.229422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:59.144 [2024-12-15 06:26:19.238663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:59.144 [2024-12-15 06:26:19.238800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:59.144 [2024-12-15 06:26:19.238819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:59.144 [2024-12-15 06:26:19.248119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:59.144 [2024-12-15 06:26:19.248262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:59.144 [2024-12-15 06:26:19.248282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:59.144 [2024-12-15 06:26:19.257629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:59.144 [2024-12-15 06:26:19.257772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:59.144 [2024-12-15 06:26:19.257795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:59.144 [2024-12-15 06:26:19.267018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:59.144 [2024-12-15 06:26:19.267158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:59.144 [2024-12-15 06:26:19.267178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:59.144 [2024-12-15 06:26:19.276283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90
00:35:59.144 [2024-12-15 06:26:19.276426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:59.144 [2024-12-15 06:26:19.276445] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.404 [2024-12-15 06:26:19.285800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.404 [2024-12-15 06:26:19.285946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-15 06:26:19.285967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.404 [2024-12-15 06:26:19.295090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.404 [2024-12-15 06:26:19.295235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-15 06:26:19.295255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.404 [2024-12-15 06:26:19.304373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.404 [2024-12-15 06:26:19.304513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-15 06:26:19.304533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.404 [2024-12-15 06:26:19.313648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.404 [2024-12-15 06:26:19.313790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:59.404 [2024-12-15 06:26:19.313810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.404 [2024-12-15 06:26:19.322903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.404 [2024-12-15 06:26:19.323051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-15 06:26:19.323070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.404 [2024-12-15 06:26:19.332217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.404 [2024-12-15 06:26:19.332360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-15 06:26:19.332380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.404 [2024-12-15 06:26:19.341492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.404 [2024-12-15 06:26:19.341634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-15 06:26:19.341654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.404 [2024-12-15 06:26:19.351102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.404 [2024-12-15 06:26:19.351245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 
lba:11416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-15 06:26:19.351265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.404 [2024-12-15 06:26:19.360478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.404 [2024-12-15 06:26:19.360618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-15 06:26:19.360636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.404 [2024-12-15 06:26:19.369763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.404 [2024-12-15 06:26:19.369906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-15 06:26:19.369925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.404 [2024-12-15 06:26:19.379041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.404 [2024-12-15 06:26:19.379181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-15 06:26:19.379201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.404 [2024-12-15 06:26:19.388330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.404 [2024-12-15 06:26:19.388470] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-15 06:26:19.388490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.404 [2024-12-15 06:26:19.397598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.404 [2024-12-15 06:26:19.397746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-15 06:26:19.397765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.404 [2024-12-15 06:26:19.406854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.404 [2024-12-15 06:26:19.406999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-15 06:26:19.407018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.404 [2024-12-15 06:26:19.416131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.404 [2024-12-15 06:26:19.416272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-15 06:26:19.416293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.404 [2024-12-15 06:26:19.425448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.404 
[2024-12-15 06:26:19.425587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-15 06:26:19.425607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.404 [2024-12-15 06:26:19.434716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.404 [2024-12-15 06:26:19.434856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-15 06:26:19.434876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.404 [2024-12-15 06:26:19.443986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.404 [2024-12-15 06:26:19.444133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-15 06:26:19.444152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.404 [2024-12-15 06:26:19.453261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.404 [2024-12-15 06:26:19.453407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.404 [2024-12-15 06:26:19.453427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.404 [2024-12-15 06:26:19.462526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.404 [2024-12-15 06:26:19.462667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.405 [2024-12-15 06:26:19.462686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.405 [2024-12-15 06:26:19.471788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.405 [2024-12-15 06:26:19.471929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.405 [2024-12-15 06:26:19.471948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.405 [2024-12-15 06:26:19.481067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.405 [2024-12-15 06:26:19.481207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.405 [2024-12-15 06:26:19.481226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.405 [2024-12-15 06:26:19.490330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.405 [2024-12-15 06:26:19.490471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.405 [2024-12-15 06:26:19.490489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.405 [2024-12-15 06:26:19.499790] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.405 [2024-12-15 06:26:19.499938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.405 [2024-12-15 06:26:19.499961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.405 [2024-12-15 06:26:19.509064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.405 [2024-12-15 06:26:19.509204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.405 [2024-12-15 06:26:19.509225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.405 [2024-12-15 06:26:19.518329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.405 [2024-12-15 06:26:19.518470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.405 [2024-12-15 06:26:19.518489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.405 [2024-12-15 06:26:19.527596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.405 [2024-12-15 06:26:19.527735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.405 [2024-12-15 06:26:19.527755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 
dnr:0 00:35:59.405 [2024-12-15 06:26:19.536892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.405 [2024-12-15 06:26:19.537046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.405 [2024-12-15 06:26:19.537066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.665 [2024-12-15 06:26:19.546554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.665 [2024-12-15 06:26:19.546693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.665 [2024-12-15 06:26:19.546713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.665 [2024-12-15 06:26:19.555816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.665 [2024-12-15 06:26:19.555956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.665 [2024-12-15 06:26:19.555975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.665 [2024-12-15 06:26:19.565081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.665 [2024-12-15 06:26:19.565222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.665 [2024-12-15 06:26:19.565242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.665 [2024-12-15 06:26:19.574341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.665 [2024-12-15 06:26:19.574481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.665 [2024-12-15 06:26:19.574500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.665 [2024-12-15 06:26:19.583612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.665 [2024-12-15 06:26:19.583759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.665 [2024-12-15 06:26:19.583779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.665 [2024-12-15 06:26:19.592876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.665 [2024-12-15 06:26:19.593019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.665 [2024-12-15 06:26:19.593039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.665 [2024-12-15 06:26:19.602139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.665 [2024-12-15 06:26:19.602280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.665 [2024-12-15 06:26:19.602301] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.665 [2024-12-15 06:26:19.611400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.665 [2024-12-15 06:26:19.611540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.665 [2024-12-15 06:26:19.611560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.665 [2024-12-15 06:26:19.620661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.665 [2024-12-15 06:26:19.620801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.665 [2024-12-15 06:26:19.620819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.665 [2024-12-15 06:26:19.629933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.665 [2024-12-15 06:26:19.630079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.665 [2024-12-15 06:26:19.630099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.665 [2024-12-15 06:26:19.639946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.665 [2024-12-15 06:26:19.640122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.665 [2024-12-15 06:26:19.640149] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.666 [2024-12-15 06:26:19.649816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.666 [2024-12-15 06:26:19.649957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-15 06:26:19.649979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.666 [2024-12-15 06:26:19.659125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.666 [2024-12-15 06:26:19.659268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-15 06:26:19.659288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.666 [2024-12-15 06:26:19.668392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.666 [2024-12-15 06:26:19.668532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-15 06:26:19.668553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.666 [2024-12-15 06:26:19.677657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.666 [2024-12-15 06:26:19.677797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:59.666 [2024-12-15 06:26:19.677816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.666 [2024-12-15 06:26:19.686918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.666 [2024-12-15 06:26:19.687076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-15 06:26:19.687096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.666 [2024-12-15 06:26:19.696191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.666 [2024-12-15 06:26:19.696329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-15 06:26:19.696348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.666 [2024-12-15 06:26:19.705479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.666 [2024-12-15 06:26:19.705619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-15 06:26:19.705638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.666 [2024-12-15 06:26:19.714752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.666 [2024-12-15 06:26:19.714893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20374 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-15 06:26:19.714912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.666 [2024-12-15 06:26:19.723998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.666 [2024-12-15 06:26:19.724139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-15 06:26:19.724159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.666 [2024-12-15 06:26:19.733297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.666 [2024-12-15 06:26:19.733448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-15 06:26:19.733468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.666 [2024-12-15 06:26:19.742535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.666 [2024-12-15 06:26:19.742673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-15 06:26:19.742695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.666 [2024-12-15 06:26:19.752068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.666 [2024-12-15 06:26:19.752212] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-15 06:26:19.752232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.666 [2024-12-15 06:26:19.761400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.666 [2024-12-15 06:26:19.761541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-15 06:26:19.761560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.666 [2024-12-15 06:26:19.770749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.666 [2024-12-15 06:26:19.770889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-15 06:26:19.770908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.666 [2024-12-15 06:26:19.779977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.666 [2024-12-15 06:26:19.780127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-15 06:26:19.780146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.666 [2024-12-15 06:26:19.789259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.666 [2024-12-15 06:26:19.789399] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-15 06:26:19.789418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.666 [2024-12-15 06:26:19.798582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.666 [2024-12-15 06:26:19.798724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.666 [2024-12-15 06:26:19.798744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.926 [2024-12-15 06:26:19.808021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.926 [2024-12-15 06:26:19.808162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.926 [2024-12-15 06:26:19.808182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.926 [2024-12-15 06:26:19.817281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.926 [2024-12-15 06:26:19.817419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.926 [2024-12-15 06:26:19.817438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.926 [2024-12-15 06:26:19.826542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with 
pdu=0x200016efef90 00:35:59.926 [2024-12-15 06:26:19.826687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.926 [2024-12-15 06:26:19.826707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.926 [2024-12-15 06:26:19.835913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.926 [2024-12-15 06:26:19.836064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.926 [2024-12-15 06:26:19.836084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.926 [2024-12-15 06:26:19.845441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.926 [2024-12-15 06:26:19.845585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.926 [2024-12-15 06:26:19.845605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.926 [2024-12-15 06:26:19.854953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.926 [2024-12-15 06:26:19.855101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.926 [2024-12-15 06:26:19.855121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.926 [2024-12-15 06:26:19.864658] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.926 [2024-12-15 06:26:19.864802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.926 [2024-12-15 06:26:19.864823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.926 [2024-12-15 06:26:19.874073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.926 [2024-12-15 06:26:19.874213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.926 [2024-12-15 06:26:19.874232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.926 [2024-12-15 06:26:19.883398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.926 [2024-12-15 06:26:19.883539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.926 [2024-12-15 06:26:19.883558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.926 [2024-12-15 06:26:19.892663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.926 [2024-12-15 06:26:19.892804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.926 [2024-12-15 06:26:19.892824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.926 [2024-12-15 
06:26:19.901918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.926 [2024-12-15 06:26:19.902062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.926 [2024-12-15 06:26:19.902082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.926 [2024-12-15 06:26:19.911177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.926 [2024-12-15 06:26:19.911321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.926 [2024-12-15 06:26:19.911341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.926 [2024-12-15 06:26:19.920463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.926 [2024-12-15 06:26:19.920605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.926 [2024-12-15 06:26:19.920624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.926 [2024-12-15 06:26:19.929727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.927 [2024-12-15 06:26:19.929869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.927 [2024-12-15 06:26:19.929888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 
sqhd:0060 p:0 m:0 dnr:0 00:35:59.927 [2024-12-15 06:26:19.939025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.927 [2024-12-15 06:26:19.939166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.927 [2024-12-15 06:26:19.939186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.927 [2024-12-15 06:26:19.948390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.927 [2024-12-15 06:26:19.948534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.927 [2024-12-15 06:26:19.948554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.927 [2024-12-15 06:26:19.957934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.927 [2024-12-15 06:26:19.958085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.927 [2024-12-15 06:26:19.958106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.927 [2024-12-15 06:26:19.967373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.927 [2024-12-15 06:26:19.967513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.927 [2024-12-15 06:26:19.967532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.927 [2024-12-15 06:26:19.976646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.927 [2024-12-15 06:26:19.976788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.927 [2024-12-15 06:26:19.976807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.927 [2024-12-15 06:26:19.985927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.927 [2024-12-15 06:26:19.986079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.927 [2024-12-15 06:26:19.986102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.927 [2024-12-15 06:26:19.995231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.927 [2024-12-15 06:26:19.995371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.927 [2024-12-15 06:26:19.995391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.927 [2024-12-15 06:26:20.004896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.927 [2024-12-15 06:26:20.005049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.927 [2024-12-15 06:26:20.005068] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.927 [2024-12-15 06:26:20.014440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.927 [2024-12-15 06:26:20.014603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.927 [2024-12-15 06:26:20.014623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.927 [2024-12-15 06:26:20.024763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.927 [2024-12-15 06:26:20.024910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.927 [2024-12-15 06:26:20.024932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.927 [2024-12-15 06:26:20.034325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.927 [2024-12-15 06:26:20.034470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.927 [2024-12-15 06:26:20.034490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.927 [2024-12-15 06:26:20.043925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.927 [2024-12-15 06:26:20.044103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:59.927 [2024-12-15 06:26:20.044126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.927 [2024-12-15 06:26:20.054036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:35:59.927 [2024-12-15 06:26:20.054183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:59.927 [2024-12-15 06:26:20.054204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:59.927 [2024-12-15 06:26:20.063593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:36:00.187 [2024-12-15 06:26:20.063737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.187 [2024-12-15 06:26:20.063758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:00.187 [2024-12-15 06:26:20.073149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:36:00.187 [2024-12-15 06:26:20.073302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.187 [2024-12-15 06:26:20.073323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:00.187 [2024-12-15 06:26:20.082711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:36:00.187 [2024-12-15 06:26:20.082856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 
lba:12038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.187 [2024-12-15 06:26:20.082876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:00.187 [2024-12-15 06:26:20.093188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:36:00.187 [2024-12-15 06:26:20.093336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.187 [2024-12-15 06:26:20.093357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:00.187 [2024-12-15 06:26:20.102611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:36:00.187 [2024-12-15 06:26:20.102752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.187 [2024-12-15 06:26:20.102772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:00.187 [2024-12-15 06:26:20.112134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:36:00.187 [2024-12-15 06:26:20.112277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.187 [2024-12-15 06:26:20.112297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:00.187 [2024-12-15 06:26:20.121678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:36:00.187 [2024-12-15 06:26:20.121820] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.187 [2024-12-15 06:26:20.121840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:00.187 [2024-12-15 06:26:20.131226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:36:00.188 [2024-12-15 06:26:20.131370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.188 [2024-12-15 06:26:20.131390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:00.188 [2024-12-15 06:26:20.140778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:36:00.188 [2024-12-15 06:26:20.140921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.188 [2024-12-15 06:26:20.140941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:00.188 [2024-12-15 06:26:20.150302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:36:00.188 [2024-12-15 06:26:20.150445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.188 [2024-12-15 06:26:20.150465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:00.188 [2024-12-15 06:26:20.159840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 
00:36:00.188 [2024-12-15 06:26:20.159984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.188 [2024-12-15 06:26:20.160010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:00.188 [2024-12-15 06:26:20.169357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:36:00.188 [2024-12-15 06:26:20.169501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.188 [2024-12-15 06:26:20.169521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:00.188 [2024-12-15 06:26:20.178912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:36:00.188 [2024-12-15 06:26:20.179063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.188 [2024-12-15 06:26:20.179083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:00.188 [2024-12-15 06:26:20.188418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:36:00.188 [2024-12-15 06:26:20.188561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.188 [2024-12-15 06:26:20.188581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:00.188 [2024-12-15 06:26:20.197961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:36:00.188 [2024-12-15 06:26:20.198117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.188 [2024-12-15 06:26:20.198137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:00.188 [2024-12-15 06:26:20.207460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:36:00.188 [2024-12-15 06:26:20.207603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.188 [2024-12-15 06:26:20.207623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:00.188 [2024-12-15 06:26:20.217012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:36:00.188 [2024-12-15 06:26:20.217156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.188 [2024-12-15 06:26:20.217181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:00.188 27236.50 IOPS, 106.39 MiB/s [2024-12-15T05:26:20.328Z] [2024-12-15 06:26:20.226568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f03dc0) with pdu=0x200016efef90 00:36:00.188 [2024-12-15 06:26:20.226712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:00.188 [2024-12-15 06:26:20.226731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
00:36:00.188 00:36:00.188 Latency(us) 00:36:00.188 [2024-12-15T05:26:20.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:00.188 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:00.188 nvme0n1 : 2.01 27234.51 106.38 0.00 0.00 4691.64 3510.86 11609.23 00:36:00.188 [2024-12-15T05:26:20.328Z] =================================================================================================================== 00:36:00.188 [2024-12-15T05:26:20.328Z] Total : 27234.51 106.38 0.00 0.00 4691.64 3510.86 11609.23 00:36:00.188 { 00:36:00.188 "results": [ 00:36:00.188 { 00:36:00.188 "job": "nvme0n1", 00:36:00.188 "core_mask": "0x2", 00:36:00.188 "workload": "randwrite", 00:36:00.188 "status": "finished", 00:36:00.188 "queue_depth": 128, 00:36:00.188 "io_size": 4096, 00:36:00.188 "runtime": 2.006021, 00:36:00.188 "iops": 27234.51050612132, 00:36:00.188 "mibps": 106.38480666453641, 00:36:00.188 "io_failed": 0, 00:36:00.188 "io_timeout": 0, 00:36:00.188 "avg_latency_us": 4691.638791764614, 00:36:00.188 "min_latency_us": 3510.8571428571427, 00:36:00.188 "max_latency_us": 11609.234285714285 00:36:00.188 } 00:36:00.188 ], 00:36:00.188 "core_count": 1 00:36:00.188 } 00:36:00.188 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:00.188 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:00.188 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:00.188 | .driver_specific 00:36:00.188 | .nvme_error 00:36:00.188 | .status_code 00:36:00.188 | .command_transient_transport_error' 00:36:00.188 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:00.448 06:26:20 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 214 > 0 )) 00:36:00.448 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1197178 00:36:00.448 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1197178 ']' 00:36:00.448 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1197178 00:36:00.448 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:00.448 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:00.448 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1197178 00:36:00.448 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:00.448 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:00.448 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1197178' 00:36:00.448 killing process with pid 1197178 00:36:00.448 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1197178 00:36:00.448 Received shutdown signal, test time was about 2.000000 seconds 00:36:00.448 00:36:00.448 Latency(us) 00:36:00.448 [2024-12-15T05:26:20.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:00.448 [2024-12-15T05:26:20.588Z] =================================================================================================================== 00:36:00.448 [2024-12-15T05:26:20.588Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:00.448 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 
-- # wait 1197178 00:36:00.707 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:36:00.707 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:00.707 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:00.707 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:00.707 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:00.707 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1197711 00:36:00.707 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1197711 /var/tmp/bperf.sock 00:36:00.708 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:36:00.708 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1197711 ']' 00:36:00.708 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:00.708 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:00.708 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:00.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:36:00.708 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:00.708 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:00.708 [2024-12-15 06:26:20.720276] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:36:00.708 [2024-12-15 06:26:20.720335] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197711 ] 00:36:00.708 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:00.708 Zero copy mechanism will not be used. 00:36:00.708 [2024-12-15 06:26:20.793436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:00.708 [2024-12-15 06:26:20.813099] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:00.967 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:00.967 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:00.967 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:00.967 06:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:00.967 06:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:00.967 06:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.967 06:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:36:01.226 06:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.226 06:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:01.226 06:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:01.226 nvme0n1 00:36:01.485 06:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:01.485 06:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.485 06:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:01.485 06:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.485 06:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:01.485 06:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:01.485 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:01.485 Zero copy mechanism will not be used. 00:36:01.485 Running I/O for 2 seconds... 
00:36:01.485 [2024-12-15 06:26:21.490348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.485 [2024-12-15 06:26:21.490444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.485 [2024-12-15 06:26:21.490473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.485 [2024-12-15 06:26:21.495700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.485 [2024-12-15 06:26:21.495789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.485 [2024-12-15 06:26:21.495811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.485 [2024-12-15 06:26:21.500854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.485 [2024-12-15 06:26:21.500928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.485 [2024-12-15 06:26:21.500948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.485 [2024-12-15 06:26:21.506125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.485 [2024-12-15 06:26:21.506214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.485 [2024-12-15 06:26:21.506234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.485 [2024-12-15 06:26:21.511343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.485 [2024-12-15 06:26:21.511415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.485 [2024-12-15 06:26:21.511435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.486 [2024-12-15 06:26:21.516747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.486 [2024-12-15 06:26:21.516845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.486 [2024-12-15 06:26:21.516863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.486 [2024-12-15 06:26:21.521919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.486 [2024-12-15 06:26:21.522016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.486 [2024-12-15 06:26:21.522036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.486 [2024-12-15 06:26:21.526920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.486 [2024-12-15 06:26:21.526988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.486 [2024-12-15 06:26:21.527014] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.486 [2024-12-15 06:26:21.532137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.486 [2024-12-15 06:26:21.532200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.486 [2024-12-15 06:26:21.532220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.486 [2024-12-15 06:26:21.537871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.486 [2024-12-15 06:26:21.538034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.486 [2024-12-15 06:26:21.538055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.486 [2024-12-15 06:26:21.543863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.486 [2024-12-15 06:26:21.543955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.486 [2024-12-15 06:26:21.543974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.486 [2024-12-15 06:26:21.548999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.486 [2024-12-15 06:26:21.549062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:01.486 [2024-12-15 06:26:21.549081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.486 [2024-12-15 06:26:21.554121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.486 [2024-12-15 06:26:21.554252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.486 [2024-12-15 06:26:21.554273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.486 [2024-12-15 06:26:21.559293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.486 [2024-12-15 06:26:21.559389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.486 [2024-12-15 06:26:21.559408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.486 [2024-12-15 06:26:21.564259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.486 [2024-12-15 06:26:21.564346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.486 [2024-12-15 06:26:21.564365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.486 [2024-12-15 06:26:21.569330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.486 [2024-12-15 06:26:21.569427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.486 [2024-12-15 06:26:21.569446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.486 [2024-12-15 06:26:21.574605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.486 [2024-12-15 06:26:21.574702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.486 [2024-12-15 06:26:21.574720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.486 [2024-12-15 06:26:21.579696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.486 [2024-12-15 06:26:21.579798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.486 [2024-12-15 06:26:21.579817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.486 [2024-12-15 06:26:21.584787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.486 [2024-12-15 06:26:21.584893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.486 [2024-12-15 06:26:21.584911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.486 [2024-12-15 06:26:21.589860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.486 [2024-12-15 06:26:21.589950] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.486 [2024-12-15 06:26:21.589968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.486 [2024-12-15 06:26:21.595373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.486 [2024-12-15 06:26:21.595457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.486 [2024-12-15 06:26:21.595475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.486 [2024-12-15 06:26:21.600501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.486 [2024-12-15 06:26:21.600583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.486 [2024-12-15 06:26:21.600601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.486 [2024-12-15 06:26:21.605566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.486 [2024-12-15 06:26:21.605670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.486 [2024-12-15 06:26:21.605689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.486 [2024-12-15 06:26:21.610704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 
00:36:01.486 [2024-12-15 06:26:21.610878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.486 [2024-12-15 06:26:21.610899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.486 [2024-12-15 06:26:21.615764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.486 [2024-12-15 06:26:21.615872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.486 [2024-12-15 06:26:21.615890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.486 [2024-12-15 06:26:21.620721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.486 [2024-12-15 06:26:21.620869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.486 [2024-12-15 06:26:21.620892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.747 [2024-12-15 06:26:21.625610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.747 [2024-12-15 06:26:21.625692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.747 [2024-12-15 06:26:21.625711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.747 [2024-12-15 06:26:21.630570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.747 [2024-12-15 06:26:21.630756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.747 [2024-12-15 06:26:21.630777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.747 [2024-12-15 06:26:21.635754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.747 [2024-12-15 06:26:21.635923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.747 [2024-12-15 06:26:21.635944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.747 [2024-12-15 06:26:21.640527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.747 [2024-12-15 06:26:21.640638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.747 [2024-12-15 06:26:21.640656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.747 [2024-12-15 06:26:21.645393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.747 [2024-12-15 06:26:21.645551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.747 [2024-12-15 06:26:21.645572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.747 [2024-12-15 06:26:21.650237] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.747 [2024-12-15 06:26:21.650315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.748 [2024-12-15 06:26:21.650334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.748 [2024-12-15 06:26:21.655016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.748 [2024-12-15 06:26:21.655111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.748 [2024-12-15 06:26:21.655130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.748 [2024-12-15 06:26:21.660374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.748 [2024-12-15 06:26:21.660470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.748 [2024-12-15 06:26:21.660489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.748 [2024-12-15 06:26:21.665062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.748 [2024-12-15 06:26:21.665152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.748 [2024-12-15 06:26:21.665171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:36:01.748 [2024-12-15 06:26:21.671172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.748 [2024-12-15 06:26:21.671351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.748 [2024-12-15 06:26:21.671371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.748 [2024-12-15 06:26:21.677624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.748 [2024-12-15 06:26:21.677805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.748 [2024-12-15 06:26:21.677827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.748 [2024-12-15 06:26:21.683829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.748 [2024-12-15 06:26:21.683888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.748 [2024-12-15 06:26:21.683907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.748 [2024-12-15 06:26:21.689898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.748 [2024-12-15 06:26:21.689957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.748 [2024-12-15 06:26:21.689977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.748 [2024-12-15 06:26:21.695623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.748 [2024-12-15 06:26:21.695676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.748 [2024-12-15 06:26:21.695695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.748 [2024-12-15 06:26:21.701079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.748 [2024-12-15 06:26:21.701150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.748 [2024-12-15 06:26:21.701170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.748 [2024-12-15 06:26:21.706361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.748 [2024-12-15 06:26:21.706420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.748 [2024-12-15 06:26:21.706440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.748 [2024-12-15 06:26:21.711190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.748 [2024-12-15 06:26:21.711258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.748 [2024-12-15 06:26:21.711277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.748 [2024-12-15 06:26:21.716034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.748 [2024-12-15 06:26:21.716130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.748 [2024-12-15 06:26:21.716149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.748 [2024-12-15 06:26:21.721497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.748 [2024-12-15 06:26:21.721565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.748 [2024-12-15 06:26:21.721585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.748 [2024-12-15 06:26:21.726807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.748 [2024-12-15 06:26:21.726863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.748 [2024-12-15 06:26:21.726883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.748 [2024-12-15 06:26:21.731810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.748 [2024-12-15 06:26:21.731926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:01.748 [2024-12-15 06:26:21.731947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.748 [2024-12-15 06:26:21.736657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.748 [2024-12-15 06:26:21.736718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.748 [2024-12-15 06:26:21.736738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.748 [2024-12-15 06:26:21.741390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.748 [2024-12-15 06:26:21.741500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.748 [2024-12-15 06:26:21.741519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.748 [2024-12-15 06:26:21.746266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.748 [2024-12-15 06:26:21.746326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.748 [2024-12-15 06:26:21.746346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.748 [2024-12-15 06:26:21.751374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.748 [2024-12-15 06:26:21.751430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.748 [2024-12-15 06:26:21.751449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.748 [2024-12-15 06:26:21.756068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.748 [2024-12-15 06:26:21.756136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.748 [2024-12-15 06:26:21.756159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.748 [2024-12-15 06:26:21.760877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.748 [2024-12-15 06:26:21.760975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.748 [2024-12-15 06:26:21.761000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.748 [2024-12-15 06:26:21.766544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.748 [2024-12-15 06:26:21.766614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.748 [2024-12-15 06:26:21.766632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.748 [2024-12-15 06:26:21.773239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.748 [2024-12-15 06:26:21.773375] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.748 [2024-12-15 06:26:21.773396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.748 [2024-12-15 06:26:21.778667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.748 [2024-12-15 06:26:21.778822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.748 [2024-12-15 06:26:21.778843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.748 [2024-12-15 06:26:21.784140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.748 [2024-12-15 06:26:21.784225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.748 [2024-12-15 06:26:21.784244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.748 [2024-12-15 06:26:21.789245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.748 [2024-12-15 06:26:21.789322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.748 [2024-12-15 06:26:21.789341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.748 [2024-12-15 06:26:21.794213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 
00:36:01.748 [2024-12-15 06:26:21.794294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.748 [2024-12-15 06:26:21.794312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.748 [2024-12-15 06:26:21.799151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.749 [2024-12-15 06:26:21.799326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.749 [2024-12-15 06:26:21.799346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.749 [2024-12-15 06:26:21.805437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.749 [2024-12-15 06:26:21.805609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.749 [2024-12-15 06:26:21.805630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.749 [2024-12-15 06:26:21.811227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.749 [2024-12-15 06:26:21.811306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.749 [2024-12-15 06:26:21.811325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.749 [2024-12-15 06:26:21.817540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.749 [2024-12-15 06:26:21.817706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.749 [2024-12-15 06:26:21.817727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.749 [2024-12-15 06:26:21.824029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.749 [2024-12-15 06:26:21.824095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.749 [2024-12-15 06:26:21.824114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.749 [2024-12-15 06:26:21.830448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.749 [2024-12-15 06:26:21.830585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.749 [2024-12-15 06:26:21.830605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.749 [2024-12-15 06:26:21.837480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.749 [2024-12-15 06:26:21.837540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.749 [2024-12-15 06:26:21.837559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.749 [2024-12-15 06:26:21.845090] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.749 [2024-12-15 06:26:21.845229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.749 [2024-12-15 06:26:21.845251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.749 [2024-12-15 06:26:21.852545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.749 [2024-12-15 06:26:21.852683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.749 [2024-12-15 06:26:21.852703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.749 [2024-12-15 06:26:21.860088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.749 [2024-12-15 06:26:21.860214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.749 [2024-12-15 06:26:21.860234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.749 [2024-12-15 06:26:21.867281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.749 [2024-12-15 06:26:21.867457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.749 [2024-12-15 06:26:21.867478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:36:01.749 [2024-12-15 06:26:21.874507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.749 [2024-12-15 06:26:21.874675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.749 [2024-12-15 06:26:21.874695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.749 [2024-12-15 06:26:21.882576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:01.749 [2024-12-15 06:26:21.882658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.749 [2024-12-15 06:26:21.882677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.010 [2024-12-15 06:26:21.888641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.010 [2024-12-15 06:26:21.888724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-15 06:26:21.888742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.010 [2024-12-15 06:26:21.893881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.010 [2024-12-15 06:26:21.893939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-15 06:26:21.893958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.010 [2024-12-15 06:26:21.899552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.010 [2024-12-15 06:26:21.899610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-15 06:26:21.899629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.010 [2024-12-15 06:26:21.904931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.010 [2024-12-15 06:26:21.904985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-15 06:26:21.905010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.010 [2024-12-15 06:26:21.909984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.010 [2024-12-15 06:26:21.910042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-15 06:26:21.910060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.010 [2024-12-15 06:26:21.914858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.010 [2024-12-15 06:26:21.914913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-15 06:26:21.914934] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.010 [2024-12-15 06:26:21.920081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.010 [2024-12-15 06:26:21.920138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-15 06:26:21.920156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.010 [2024-12-15 06:26:21.925693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.010 [2024-12-15 06:26:21.925755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-15 06:26:21.925773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.010 [2024-12-15 06:26:21.930716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.010 [2024-12-15 06:26:21.930768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-15 06:26:21.930786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.010 [2024-12-15 06:26:21.935645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.010 [2024-12-15 06:26:21.935699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:02.010 [2024-12-15 06:26:21.935716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.010 [2024-12-15 06:26:21.940271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.010 [2024-12-15 06:26:21.940333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-15 06:26:21.940351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.010 [2024-12-15 06:26:21.944902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.010 [2024-12-15 06:26:21.944965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-15 06:26:21.944983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.010 [2024-12-15 06:26:21.949381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.010 [2024-12-15 06:26:21.949452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-15 06:26:21.949470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.010 [2024-12-15 06:26:21.954025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.010 [2024-12-15 06:26:21.954087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-15 06:26:21.954106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.010 [2024-12-15 06:26:21.958695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.010 [2024-12-15 06:26:21.958822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-15 06:26:21.958840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.010 [2024-12-15 06:26:21.963349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.010 [2024-12-15 06:26:21.963430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-15 06:26:21.963448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.010 [2024-12-15 06:26:21.968173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.010 [2024-12-15 06:26:21.968231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-15 06:26:21.968249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.010 [2024-12-15 06:26:21.972665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.010 [2024-12-15 06:26:21.972736] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-15 06:26:21.972755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.010 [2024-12-15 06:26:21.977296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.010 [2024-12-15 06:26:21.977349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-15 06:26:21.977368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.010 [2024-12-15 06:26:21.982053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.010 [2024-12-15 06:26:21.982170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-15 06:26:21.982189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.010 [2024-12-15 06:26:21.987453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.010 [2024-12-15 06:26:21.987534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-15 06:26:21.987552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.010 [2024-12-15 06:26:21.992506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 
00:36:02.010 [2024-12-15 06:26:21.992580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-15 06:26:21.992598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.010 [2024-12-15 06:26:21.997719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.010 [2024-12-15 06:26:21.997772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-15 06:26:21.997790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.010 [2024-12-15 06:26:22.003312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.010 [2024-12-15 06:26:22.003458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.010 [2024-12-15 06:26:22.003478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.010 [2024-12-15 06:26:22.008521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.011 [2024-12-15 06:26:22.008646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-15 06:26:22.008666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.011 [2024-12-15 06:26:22.013737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.011 [2024-12-15 06:26:22.013807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-15 06:26:22.013826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.011 [2024-12-15 06:26:22.019034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.011 [2024-12-15 06:26:22.019089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-15 06:26:22.019107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.011 [2024-12-15 06:26:22.024120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.011 [2024-12-15 06:26:22.024180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-15 06:26:22.024199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.011 [2024-12-15 06:26:22.029671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.011 [2024-12-15 06:26:22.029809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-15 06:26:22.029829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.011 [2024-12-15 06:26:22.035119] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.011 [2024-12-15 06:26:22.035254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-15 06:26:22.035274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.011 [2024-12-15 06:26:22.040326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.011 [2024-12-15 06:26:22.040393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-15 06:26:22.040411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.011 [2024-12-15 06:26:22.045743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.011 [2024-12-15 06:26:22.045798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-15 06:26:22.045821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.011 [2024-12-15 06:26:22.050768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.011 [2024-12-15 06:26:22.050821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-15 06:26:22.050839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:36:02.011 [2024-12-15 06:26:22.055809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.011 [2024-12-15 06:26:22.055872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-15 06:26:22.055891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.011 [2024-12-15 06:26:22.061411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.011 [2024-12-15 06:26:22.061476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-15 06:26:22.061495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.011 [2024-12-15 06:26:22.066301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.011 [2024-12-15 06:26:22.066368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-15 06:26:22.066386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.011 [2024-12-15 06:26:22.070922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.011 [2024-12-15 06:26:22.070988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-15 06:26:22.071013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.011 [2024-12-15 06:26:22.075794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.011 [2024-12-15 06:26:22.075849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-15 06:26:22.075867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.011 [2024-12-15 06:26:22.081140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.011 [2024-12-15 06:26:22.081235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-15 06:26:22.081252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.011 [2024-12-15 06:26:22.086118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.011 [2024-12-15 06:26:22.086171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-15 06:26:22.086190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.011 [2024-12-15 06:26:22.091178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.011 [2024-12-15 06:26:22.091328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-15 06:26:22.091348] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.011 [2024-12-15 06:26:22.097054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.011 [2024-12-15 06:26:22.097111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-15 06:26:22.097131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.011 [2024-12-15 06:26:22.102017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.011 [2024-12-15 06:26:22.102078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-15 06:26:22.102096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.011 [2024-12-15 06:26:22.106996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.011 [2024-12-15 06:26:22.107060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-15 06:26:22.107079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.011 [2024-12-15 06:26:22.112318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.011 [2024-12-15 06:26:22.112370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:02.011 [2024-12-15 06:26:22.112388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.011 [2024-12-15 06:26:22.117481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.011 [2024-12-15 06:26:22.117541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-15 06:26:22.117560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.011 [2024-12-15 06:26:22.123051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.011 [2024-12-15 06:26:22.123164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-15 06:26:22.123182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.011 [2024-12-15 06:26:22.128092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.011 [2024-12-15 06:26:22.128148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-15 06:26:22.128167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.011 [2024-12-15 06:26:22.133448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.011 [2024-12-15 06:26:22.133563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-15 06:26:22.133582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.011 [2024-12-15 06:26:22.138447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.011 [2024-12-15 06:26:22.138499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-15 06:26:22.138518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.011 [2024-12-15 06:26:22.143447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.011 [2024-12-15 06:26:22.143536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.011 [2024-12-15 06:26:22.143554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.272 [2024-12-15 06:26:22.148840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.272 [2024-12-15 06:26:22.148895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.272 [2024-12-15 06:26:22.148914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.272 [2024-12-15 06:26:22.154154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.272 [2024-12-15 06:26:22.154222] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.272 [2024-12-15 06:26:22.154241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.272 [2024-12-15 06:26:22.159113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.272 [2024-12-15 06:26:22.159256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.272 [2024-12-15 06:26:22.159278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.272 [2024-12-15 06:26:22.164171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.272 [2024-12-15 06:26:22.164227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.272 [2024-12-15 06:26:22.164245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.272 [2024-12-15 06:26:22.169126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.272 [2024-12-15 06:26:22.169222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.272 [2024-12-15 06:26:22.169240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.272 [2024-12-15 06:26:22.174000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.272 [2024-12-15 06:26:22.174065] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.272 [2024-12-15 06:26:22.174083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.273 [2024-12-15 06:26:22.178527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.273 [2024-12-15 06:26:22.178727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.273 [2024-12-15 06:26:22.178750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.273 [2024-12-15 06:26:22.184303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.273 [2024-12-15 06:26:22.184357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.273 [2024-12-15 06:26:22.184376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.273 [2024-12-15 06:26:22.189472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.273 [2024-12-15 06:26:22.189546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.273 [2024-12-15 06:26:22.189564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.273 [2024-12-15 06:26:22.194535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with 
pdu=0x200016eff3c8 00:36:02.273 [2024-12-15 06:26:22.194591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.273 [2024-12-15 06:26:22.194609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.273 [2024-12-15 06:26:22.199731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.273 [2024-12-15 06:26:22.199789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.273 [2024-12-15 06:26:22.199807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.273 [2024-12-15 06:26:22.204667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.273 [2024-12-15 06:26:22.204753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.273 [2024-12-15 06:26:22.204771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.273 [2024-12-15 06:26:22.209445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.273 [2024-12-15 06:26:22.209550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.273 [2024-12-15 06:26:22.209568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.273 [2024-12-15 06:26:22.214138] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.273 [2024-12-15 06:26:22.214239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.273 [2024-12-15 06:26:22.214258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.273 [2024-12-15 06:26:22.219575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.273 [2024-12-15 06:26:22.219646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.273 [2024-12-15 06:26:22.219664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.273 [2024-12-15 06:26:22.224706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.273 [2024-12-15 06:26:22.224844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.273 [2024-12-15 06:26:22.224864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.273 [2024-12-15 06:26:22.229741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.273 [2024-12-15 06:26:22.229866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.273 [2024-12-15 06:26:22.229887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.273 [2024-12-15 
06:26:22.234702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.273 [2024-12-15 06:26:22.234795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.273 [2024-12-15 06:26:22.234813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.273 [2024-12-15 06:26:22.239582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.273 [2024-12-15 06:26:22.239686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.273 [2024-12-15 06:26:22.239704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.273 [2024-12-15 06:26:22.244634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.273 [2024-12-15 06:26:22.244688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.273 [2024-12-15 06:26:22.244706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.273 [2024-12-15 06:26:22.250646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.273 [2024-12-15 06:26:22.250702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.273 [2024-12-15 06:26:22.250722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:36:02.273 [2024-12-15 06:26:22.255840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.273 [2024-12-15 06:26:22.255969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.273 [2024-12-15 06:26:22.255990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.273 [2024-12-15 06:26:22.260923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.273 [2024-12-15 06:26:22.260977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.273 [2024-12-15 06:26:22.261002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.273 [2024-12-15 06:26:22.265863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.273 [2024-12-15 06:26:22.265918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.273 [2024-12-15 06:26:22.265936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.273 [2024-12-15 06:26:22.271149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.273 [2024-12-15 06:26:22.271217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.273 [2024-12-15 06:26:22.271235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.273 [2024-12-15 06:26:22.276154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.273 [2024-12-15 06:26:22.276207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.273 [2024-12-15 06:26:22.276225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.273 [2024-12-15 06:26:22.281800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.273 [2024-12-15 06:26:22.281854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.273 [2024-12-15 06:26:22.281872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.273 [2024-12-15 06:26:22.286883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.273 [2024-12-15 06:26:22.286957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.273 [2024-12-15 06:26:22.286976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.273 [2024-12-15 06:26:22.291808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.273 [2024-12-15 06:26:22.291882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.273 [2024-12-15 06:26:22.291900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.273 [2024-12-15 06:26:22.296877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.273 [2024-12-15 06:26:22.297039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.273 [2024-12-15 06:26:22.297060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.273 [2024-12-15 06:26:22.301860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.273 [2024-12-15 06:26:22.301934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.273 [2024-12-15 06:26:22.301953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.273 [2024-12-15 06:26:22.307664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.273 [2024-12-15 06:26:22.307755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.273 [2024-12-15 06:26:22.307773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.273 [2024-12-15 06:26:22.312793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.273 [2024-12-15 06:26:22.312861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:02.273 [2024-12-15 06:26:22.312883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.273 [2024-12-15 06:26:22.317637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.273 [2024-12-15 06:26:22.317702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.274 [2024-12-15 06:26:22.317721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.274 [2024-12-15 06:26:22.322333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.274 [2024-12-15 06:26:22.322413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.274 [2024-12-15 06:26:22.322432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.274 [2024-12-15 06:26:22.327212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.274 [2024-12-15 06:26:22.327283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.274 [2024-12-15 06:26:22.327302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.274 [2024-12-15 06:26:22.332433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.274 [2024-12-15 06:26:22.332485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.274 [2024-12-15 06:26:22.332504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.274 [2024-12-15 06:26:22.337363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.274 [2024-12-15 06:26:22.337450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.274 [2024-12-15 06:26:22.337468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.274 [2024-12-15 06:26:22.342727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.274 [2024-12-15 06:26:22.342784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.274 [2024-12-15 06:26:22.342802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.274 [2024-12-15 06:26:22.347310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.274 [2024-12-15 06:26:22.347381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.274 [2024-12-15 06:26:22.347399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.274 [2024-12-15 06:26:22.352517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.274 [2024-12-15 06:26:22.352593] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.274 [2024-12-15 06:26:22.352611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.274 [2024-12-15 06:26:22.357501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.274 [2024-12-15 06:26:22.357561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.274 [2024-12-15 06:26:22.357579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.274 [2024-12-15 06:26:22.362323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.274 [2024-12-15 06:26:22.362395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.274 [2024-12-15 06:26:22.362413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.274 [2024-12-15 06:26:22.367027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.274 [2024-12-15 06:26:22.367093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.274 [2024-12-15 06:26:22.367112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.274 [2024-12-15 06:26:22.371710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 
00:36:02.274 [2024-12-15 06:26:22.371773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.274 [2024-12-15 06:26:22.371792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.274 [2024-12-15 06:26:22.377074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.274 [2024-12-15 06:26:22.377130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.274 [2024-12-15 06:26:22.377147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.274 [2024-12-15 06:26:22.382049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.274 [2024-12-15 06:26:22.382123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.274 [2024-12-15 06:26:22.382142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.274 [2024-12-15 06:26:22.386685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.274 [2024-12-15 06:26:22.386737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.274 [2024-12-15 06:26:22.386756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.274 [2024-12-15 06:26:22.391343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.274 [2024-12-15 06:26:22.391404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.274 [2024-12-15 06:26:22.391423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.274 [2024-12-15 06:26:22.395692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.274 [2024-12-15 06:26:22.395758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.274 [2024-12-15 06:26:22.395777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.274 [2024-12-15 06:26:22.400222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.274 [2024-12-15 06:26:22.400299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.274 [2024-12-15 06:26:22.400317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.274 [2024-12-15 06:26:22.404853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.274 [2024-12-15 06:26:22.404962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.274 [2024-12-15 06:26:22.404981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.535 [2024-12-15 06:26:22.409524] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.535 [2024-12-15 06:26:22.409584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.535 [2024-12-15 06:26:22.409603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.535 [2024-12-15 06:26:22.414124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.535 [2024-12-15 06:26:22.414181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.535 [2024-12-15 06:26:22.414200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.535 [2024-12-15 06:26:22.418756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.535 [2024-12-15 06:26:22.418813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.536 [2024-12-15 06:26:22.418831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.536 [2024-12-15 06:26:22.423299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.536 [2024-12-15 06:26:22.423406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.536 [2024-12-15 06:26:22.423424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:36:02.536 [2024-12-15 06:26:22.427783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.536 [2024-12-15 06:26:22.427834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.536 [2024-12-15 06:26:22.427852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.536 [2024-12-15 06:26:22.432177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.536 [2024-12-15 06:26:22.432251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.536 [2024-12-15 06:26:22.432270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.536 [2024-12-15 06:26:22.436484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.536 [2024-12-15 06:26:22.436571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.536 [2024-12-15 06:26:22.436593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.536 [2024-12-15 06:26:22.441164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.536 [2024-12-15 06:26:22.441228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.536 [2024-12-15 06:26:22.441247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.536 [2024-12-15 06:26:22.445694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.536 [2024-12-15 06:26:22.445770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.536 [2024-12-15 06:26:22.445789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.536 [2024-12-15 06:26:22.451314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.536 [2024-12-15 06:26:22.451364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.536 [2024-12-15 06:26:22.451383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.536 [2024-12-15 06:26:22.456248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.536 [2024-12-15 06:26:22.456337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.536 [2024-12-15 06:26:22.456355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.536 [2024-12-15 06:26:22.461508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.536 [2024-12-15 06:26:22.461564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.536 [2024-12-15 06:26:22.461582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.536 [2024-12-15 06:26:22.466568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.536 [2024-12-15 06:26:22.466644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.536 [2024-12-15 06:26:22.466664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.536 [2024-12-15 06:26:22.472235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.536 [2024-12-15 06:26:22.472307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.536 [2024-12-15 06:26:22.472325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.536 [2024-12-15 06:26:22.477792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.536 [2024-12-15 06:26:22.477899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.536 [2024-12-15 06:26:22.477917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.536 [2024-12-15 06:26:22.483180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.536 [2024-12-15 06:26:22.483312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:02.536 [2024-12-15 06:26:22.483332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.536 [2024-12-15 06:26:22.489977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.536 [2024-12-15 06:26:22.491373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.536 [2024-12-15 06:26:22.491393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.536 5921.00 IOPS, 740.12 MiB/s [2024-12-15T05:26:22.676Z] [2024-12-15 06:26:22.497660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.536 [2024-12-15 06:26:22.497806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.536 [2024-12-15 06:26:22.497825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.536 [2024-12-15 06:26:22.504850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.536 [2024-12-15 06:26:22.504918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.536 [2024-12-15 06:26:22.504936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.536 [2024-12-15 06:26:22.511629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.536 [2024-12-15 06:26:22.511693] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.536 [2024-12-15 06:26:22.511711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.536 [2024-12-15 06:26:22.518149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.536 [2024-12-15 06:26:22.518245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.536 [2024-12-15 06:26:22.518263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.536 [2024-12-15 06:26:22.524814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.536 [2024-12-15 06:26:22.524881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.536 [2024-12-15 06:26:22.524900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.536 [2024-12-15 06:26:22.529454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.536 [2024-12-15 06:26:22.529525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.536 [2024-12-15 06:26:22.529543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.536 [2024-12-15 06:26:22.534004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 
00:36:02.536 [2024-12-15 06:26:22.534072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.536 [2024-12-15 06:26:22.534091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.536 [2024-12-15 06:26:22.538439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.536 [2024-12-15 06:26:22.538514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.536 [2024-12-15 06:26:22.538532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.536 [2024-12-15 06:26:22.542849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.536 [2024-12-15 06:26:22.542924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.536 [2024-12-15 06:26:22.542943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.536 [2024-12-15 06:26:22.547249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.536 [2024-12-15 06:26:22.547351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.536 [2024-12-15 06:26:22.547370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.536 [2024-12-15 06:26:22.551735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.536 [2024-12-15 06:26:22.551803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.536 [2024-12-15 06:26:22.551821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.536 [2024-12-15 06:26:22.556189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.536 [2024-12-15 06:26:22.556262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.536 [2024-12-15 06:26:22.556280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.536 [2024-12-15 06:26:22.560539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.536 [2024-12-15 06:26:22.560604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.536 [2024-12-15 06:26:22.560622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.537 [2024-12-15 06:26:22.564950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.537 [2024-12-15 06:26:22.565030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.537 [2024-12-15 06:26:22.565048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.537 [2024-12-15 06:26:22.569595] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.537 [2024-12-15 06:26:22.569656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.537 [2024-12-15 06:26:22.569675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.537 [2024-12-15 06:26:22.574253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.537 [2024-12-15 06:26:22.574314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.537 [2024-12-15 06:26:22.574335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.537 [2024-12-15 06:26:22.578621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.537 [2024-12-15 06:26:22.578677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.537 [2024-12-15 06:26:22.578695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.537 [2024-12-15 06:26:22.583025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.537 [2024-12-15 06:26:22.583080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.537 [2024-12-15 06:26:22.583097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:36:02.537 [2024-12-15 06:26:22.587405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.537 [2024-12-15 06:26:22.587469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.537 [2024-12-15 06:26:22.587487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.537 [2024-12-15 06:26:22.592337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.537 [2024-12-15 06:26:22.592489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.537 [2024-12-15 06:26:22.592506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.537 [2024-12-15 06:26:22.598399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.537 [2024-12-15 06:26:22.598571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.537 [2024-12-15 06:26:22.598590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.537 [2024-12-15 06:26:22.604890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.537 [2024-12-15 06:26:22.605068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.537 [2024-12-15 06:26:22.605089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.537 [2024-12-15 06:26:22.611076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.537 [2024-12-15 06:26:22.611151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.537 [2024-12-15 06:26:22.611169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.537 [2024-12-15 06:26:22.617526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.537 [2024-12-15 06:26:22.617683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.537 [2024-12-15 06:26:22.617705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.537 [2024-12-15 06:26:22.625144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.537 [2024-12-15 06:26:22.625224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.537 [2024-12-15 06:26:22.625243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.537 [2024-12-15 06:26:22.630732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.537 [2024-12-15 06:26:22.630788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.537 [2024-12-15 06:26:22.630807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.537 [2024-12-15 06:26:22.635448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.537 [2024-12-15 06:26:22.635543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.537 [2024-12-15 06:26:22.635564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.537 [2024-12-15 06:26:22.640110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.537 [2024-12-15 06:26:22.640181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.537 [2024-12-15 06:26:22.640200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.537 [2024-12-15 06:26:22.644595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.537 [2024-12-15 06:26:22.644664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.537 [2024-12-15 06:26:22.644682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.537 [2024-12-15 06:26:22.649029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.537 [2024-12-15 06:26:22.649092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:02.537 [2024-12-15 06:26:22.649110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.537 [2024-12-15 06:26:22.653631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.537 [2024-12-15 06:26:22.653689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.537 [2024-12-15 06:26:22.653708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.537 [2024-12-15 06:26:22.658466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.537 [2024-12-15 06:26:22.658602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.537 [2024-12-15 06:26:22.658620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.537 [2024-12-15 06:26:22.663766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.537 [2024-12-15 06:26:22.663835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.537 [2024-12-15 06:26:22.663853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.537 [2024-12-15 06:26:22.669395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.537 [2024-12-15 06:26:22.669468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.537 [2024-12-15 06:26:22.669487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.799 [2024-12-15 06:26:22.674271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.799 [2024-12-15 06:26:22.674332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.799 [2024-12-15 06:26:22.674349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.799 [2024-12-15 06:26:22.679093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.799 [2024-12-15 06:26:22.679159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.799 [2024-12-15 06:26:22.679178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.799 [2024-12-15 06:26:22.684036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.799 [2024-12-15 06:26:22.684090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.799 [2024-12-15 06:26:22.684108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.799 [2024-12-15 06:26:22.689323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.799 [2024-12-15 06:26:22.689412] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.799 [2024-12-15 06:26:22.689431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.799 [2024-12-15 06:26:22.694181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.799 [2024-12-15 06:26:22.694240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.799 [2024-12-15 06:26:22.694258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.799 [2024-12-15 06:26:22.698981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.799 [2024-12-15 06:26:22.699062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.799 [2024-12-15 06:26:22.699080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.799 [2024-12-15 06:26:22.703802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.799 [2024-12-15 06:26:22.703857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.799 [2024-12-15 06:26:22.703875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.799 [2024-12-15 06:26:22.708844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 
00:36:02.799 [2024-12-15 06:26:22.708897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.799 [2024-12-15 06:26:22.708918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.799 [2024-12-15 06:26:22.714088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.799 [2024-12-15 06:26:22.714144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.799 [2024-12-15 06:26:22.714162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.799 [2024-12-15 06:26:22.719175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.799 [2024-12-15 06:26:22.719237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.799 [2024-12-15 06:26:22.719254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.799 [2024-12-15 06:26:22.723793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.799 [2024-12-15 06:26:22.723859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.799 [2024-12-15 06:26:22.723877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.799 [2024-12-15 06:26:22.728728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.799 [2024-12-15 06:26:22.728793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.799 [2024-12-15 06:26:22.728811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.799 [2024-12-15 06:26:22.733588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.799 [2024-12-15 06:26:22.733679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.799 [2024-12-15 06:26:22.733700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.799 [2024-12-15 06:26:22.738873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.799 [2024-12-15 06:26:22.739059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.799 [2024-12-15 06:26:22.739080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.799 [2024-12-15 06:26:22.744106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.799 [2024-12-15 06:26:22.744232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.799 [2024-12-15 06:26:22.744253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.799 [2024-12-15 06:26:22.749312] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.799 [2024-12-15 06:26:22.749365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.799 [2024-12-15 06:26:22.749383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.799 [2024-12-15 06:26:22.754654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.799 [2024-12-15 06:26:22.754782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.799 [2024-12-15 06:26:22.754801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.799 [2024-12-15 06:26:22.760270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.799 [2024-12-15 06:26:22.760344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.799 [2024-12-15 06:26:22.760363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.799 [2024-12-15 06:26:22.765388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.799 [2024-12-15 06:26:22.765440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.799 [2024-12-15 06:26:22.765458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:36:02.799 [2024-12-15 06:26:22.770639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.799 [2024-12-15 06:26:22.770704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.799 [2024-12-15 06:26:22.770723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.800 [2024-12-15 06:26:22.775760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.800 [2024-12-15 06:26:22.775819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.800 [2024-12-15 06:26:22.775838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.800 [2024-12-15 06:26:22.781014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.800 [2024-12-15 06:26:22.781069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.800 [2024-12-15 06:26:22.781088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.800 [2024-12-15 06:26:22.786207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.800 [2024-12-15 06:26:22.786274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.800 [2024-12-15 06:26:22.786293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.800 [2024-12-15 06:26:22.791321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.800 [2024-12-15 06:26:22.791380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.800 [2024-12-15 06:26:22.791398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.800 [2024-12-15 06:26:22.796558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.800 [2024-12-15 06:26:22.796615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.800 [2024-12-15 06:26:22.796635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.800 [2024-12-15 06:26:22.801777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.800 [2024-12-15 06:26:22.801830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.800 [2024-12-15 06:26:22.801850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.800 [2024-12-15 06:26:22.807839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.800 [2024-12-15 06:26:22.807981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.800 [2024-12-15 06:26:22.808008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.800 [2024-12-15 06:26:22.812684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.800 [2024-12-15 06:26:22.812751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.800 [2024-12-15 06:26:22.812769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.800 [2024-12-15 06:26:22.817386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.800 [2024-12-15 06:26:22.817453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.800 [2024-12-15 06:26:22.817472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.800 [2024-12-15 06:26:22.821979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.800 [2024-12-15 06:26:22.822041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.800 [2024-12-15 06:26:22.822059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.800 [2024-12-15 06:26:22.826571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.800 [2024-12-15 06:26:22.826632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:02.800 [2024-12-15 06:26:22.826651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.800 [2024-12-15 06:26:22.831428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.800 [2024-12-15 06:26:22.831489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.800 [2024-12-15 06:26:22.831508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.800 [2024-12-15 06:26:22.836246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.800 [2024-12-15 06:26:22.836303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.800 [2024-12-15 06:26:22.836321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.800 [2024-12-15 06:26:22.840988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.800 [2024-12-15 06:26:22.841060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.800 [2024-12-15 06:26:22.841082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.800 [2024-12-15 06:26:22.845940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.800 [2024-12-15 06:26:22.846008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.800 [2024-12-15 06:26:22.846027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.800 [2024-12-15 06:26:22.850582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.800 [2024-12-15 06:26:22.850640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.800 [2024-12-15 06:26:22.850659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.800 [2024-12-15 06:26:22.855350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.800 [2024-12-15 06:26:22.855411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.800 [2024-12-15 06:26:22.855430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.800 [2024-12-15 06:26:22.860889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.800 [2024-12-15 06:26:22.860952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.800 [2024-12-15 06:26:22.860970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.800 [2024-12-15 06:26:22.866029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.800 [2024-12-15 06:26:22.866086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.800 [2024-12-15 06:26:22.866103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.800 [2024-12-15 06:26:22.871358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.800 [2024-12-15 06:26:22.871415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.800 [2024-12-15 06:26:22.871434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.800 [2024-12-15 06:26:22.876114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.800 [2024-12-15 06:26:22.876172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.800 [2024-12-15 06:26:22.876191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.800 [2024-12-15 06:26:22.880833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.800 [2024-12-15 06:26:22.880956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.800 [2024-12-15 06:26:22.880975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.800 [2024-12-15 06:26:22.885441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 
00:36:02.800 [2024-12-15 06:26:22.885496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.800 [2024-12-15 06:26:22.885514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.800 [2024-12-15 06:26:22.889929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.800 [2024-12-15 06:26:22.889987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.800 [2024-12-15 06:26:22.890014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.800 [2024-12-15 06:26:22.894535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.800 [2024-12-15 06:26:22.894601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.800 [2024-12-15 06:26:22.894619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.800 [2024-12-15 06:26:22.899311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.800 [2024-12-15 06:26:22.899365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.800 [2024-12-15 06:26:22.899383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.800 [2024-12-15 06:26:22.903786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.800 [2024-12-15 06:26:22.903897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.800 [2024-12-15 06:26:22.903915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.800 [2024-12-15 06:26:22.908555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.800 [2024-12-15 06:26:22.908611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.801 [2024-12-15 06:26:22.908629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.801 [2024-12-15 06:26:22.912968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.801 [2024-12-15 06:26:22.913038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.801 [2024-12-15 06:26:22.913056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.801 [2024-12-15 06:26:22.917373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.801 [2024-12-15 06:26:22.917431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.801 [2024-12-15 06:26:22.917449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.801 [2024-12-15 06:26:22.921843] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.801 [2024-12-15 06:26:22.921901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.801 [2024-12-15 06:26:22.921919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.801 [2024-12-15 06:26:22.926598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.801 [2024-12-15 06:26:22.926671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.801 [2024-12-15 06:26:22.926689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.801 [2024-12-15 06:26:22.931728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:02.801 [2024-12-15 06:26:22.931788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.801 [2024-12-15 06:26:22.931807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:03.062 [2024-12-15 06:26:22.937940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.062 [2024-12-15 06:26:22.938080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.062 [2024-12-15 06:26:22.938099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:36:03.062 [2024-12-15 06:26:22.945285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.062 [2024-12-15 06:26:22.945418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.062 [2024-12-15 06:26:22.945437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:03.062 [2024-12-15 06:26:22.952434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.062 [2024-12-15 06:26:22.952495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.062 [2024-12-15 06:26:22.952514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:03.062 [2024-12-15 06:26:22.958518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.062 [2024-12-15 06:26:22.958582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.062 [2024-12-15 06:26:22.958601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:03.062 [2024-12-15 06:26:22.964138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.062 [2024-12-15 06:26:22.964193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.063 [2024-12-15 06:26:22.964211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:03.063 [2024-12-15 06:26:22.969133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.063 [2024-12-15 06:26:22.969203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.063 [2024-12-15 06:26:22.969221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:03.063 [2024-12-15 06:26:22.973854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.063 [2024-12-15 06:26:22.973921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.063 [2024-12-15 06:26:22.973943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:03.063 [2024-12-15 06:26:22.978616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.063 [2024-12-15 06:26:22.978671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.063 [2024-12-15 06:26:22.978690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:03.063 [2024-12-15 06:26:22.983119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.063 [2024-12-15 06:26:22.983188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.063 [2024-12-15 06:26:22.983206] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:03.063 [2024-12-15 06:26:22.987524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.063 [2024-12-15 06:26:22.987598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.063 [2024-12-15 06:26:22.987616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:03.063 [2024-12-15 06:26:22.992107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.063 [2024-12-15 06:26:22.992160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.063 [2024-12-15 06:26:22.992179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:03.063 [2024-12-15 06:26:22.996707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.063 [2024-12-15 06:26:22.996762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.063 [2024-12-15 06:26:22.996781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:03.063 [2024-12-15 06:26:23.001337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.063 [2024-12-15 06:26:23.001393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:03.063 [2024-12-15 06:26:23.001411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:03.063 [2024-12-15 06:26:23.006015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.063 [2024-12-15 06:26:23.006081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.063 [2024-12-15 06:26:23.006099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:03.063 [2024-12-15 06:26:23.010812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.063 [2024-12-15 06:26:23.010877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.063 [2024-12-15 06:26:23.010895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:03.063 [2024-12-15 06:26:23.015507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.063 [2024-12-15 06:26:23.015572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.063 [2024-12-15 06:26:23.015590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:03.063 [2024-12-15 06:26:23.020323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.063 [2024-12-15 06:26:23.020381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14496 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.063 [2024-12-15 06:26:23.020398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:03.063 [2024-12-15 06:26:23.024763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.063 [2024-12-15 06:26:23.024827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.063 [2024-12-15 06:26:23.024845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:03.063 [2024-12-15 06:26:23.029280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.063 [2024-12-15 06:26:23.029380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.063 [2024-12-15 06:26:23.029399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:03.063 [2024-12-15 06:26:23.034197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.063 [2024-12-15 06:26:23.034284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.063 [2024-12-15 06:26:23.034303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:03.063 [2024-12-15 06:26:23.040189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.063 [2024-12-15 06:26:23.040243] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.063 [2024-12-15 06:26:23.040262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:03.063 [2024-12-15 06:26:23.045492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.063 [2024-12-15 06:26:23.045588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.063 [2024-12-15 06:26:23.045606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:03.063 [2024-12-15 06:26:23.050376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.063 [2024-12-15 06:26:23.050445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.063 [2024-12-15 06:26:23.050463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:03.063 [2024-12-15 06:26:23.055145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.063 [2024-12-15 06:26:23.055211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.063 [2024-12-15 06:26:23.055229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:03.063 [2024-12-15 06:26:23.059846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 
00:36:03.063 [2024-12-15 06:26:23.059903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.063 [2024-12-15 06:26:23.059922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:03.063 [2024-12-15 06:26:23.064773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.063 [2024-12-15 06:26:23.064834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.063 [2024-12-15 06:26:23.064853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:03.063 [2024-12-15 06:26:23.069352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.063 [2024-12-15 06:26:23.069417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.063 [2024-12-15 06:26:23.069435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:03.063 [2024-12-15 06:26:23.073641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.063 [2024-12-15 06:26:23.073700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.063 [2024-12-15 06:26:23.073718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:03.063 [2024-12-15 06:26:23.077930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.063 [2024-12-15 06:26:23.077988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.063 [2024-12-15 06:26:23.078013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:03.063 [2024-12-15 06:26:23.082226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.063 [2024-12-15 06:26:23.082278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.063 [2024-12-15 06:26:23.082297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:03.063 [2024-12-15 06:26:23.086520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.063 [2024-12-15 06:26:23.086574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.063 [2024-12-15 06:26:23.086592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:03.063 [2024-12-15 06:26:23.090748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.063 [2024-12-15 06:26:23.090810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.063 [2024-12-15 06:26:23.090828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:03.064 [2024-12-15 06:26:23.095201] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.064 [2024-12-15 06:26:23.095264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.064 [2024-12-15 06:26:23.095285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:03.064 [2024-12-15 06:26:23.099501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.064 [2024-12-15 06:26:23.099569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.064 [2024-12-15 06:26:23.099587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:03.064 [2024-12-15 06:26:23.103984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.064 [2024-12-15 06:26:23.104050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.064 [2024-12-15 06:26:23.104068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:03.064 [2024-12-15 06:26:23.108316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.064 [2024-12-15 06:26:23.108379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.064 [2024-12-15 06:26:23.108397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:36:03.064 [2024-12-15 06:26:23.112631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.064 [2024-12-15 06:26:23.112686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.064 [2024-12-15 06:26:23.112704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:03.064 [2024-12-15 06:26:23.116867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.064 [2024-12-15 06:26:23.116933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.064 [2024-12-15 06:26:23.116952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:03.064 [2024-12-15 06:26:23.121200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.064 [2024-12-15 06:26:23.121261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.064 [2024-12-15 06:26:23.121279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:03.064 [2024-12-15 06:26:23.125522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.064 [2024-12-15 06:26:23.125591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.064 [2024-12-15 06:26:23.125610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:03.064 [2024-12-15 06:26:23.129998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.064 [2024-12-15 06:26:23.130058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.064 [2024-12-15 06:26:23.130076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:03.064 [2024-12-15 06:26:23.134272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.064 [2024-12-15 06:26:23.134335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.064 [2024-12-15 06:26:23.134353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:03.064 [2024-12-15 06:26:23.138501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.064 [2024-12-15 06:26:23.138550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.064 [2024-12-15 06:26:23.138568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:03.064 [2024-12-15 06:26:23.142782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.064 [2024-12-15 06:26:23.142838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.064 [2024-12-15 06:26:23.142857] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:03.064 [2024-12-15 06:26:23.147096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.064 [2024-12-15 06:26:23.147153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.064 [2024-12-15 06:26:23.147171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:03.064 [2024-12-15 06:26:23.151316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.064 [2024-12-15 06:26:23.151417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.064 [2024-12-15 06:26:23.151436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:03.064 [2024-12-15 06:26:23.155586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.064 [2024-12-15 06:26:23.155656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.064 [2024-12-15 06:26:23.155675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:03.064 [2024-12-15 06:26:23.159894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8 00:36:03.064 [2024-12-15 06:26:23.159946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:03.064 [2024-12-15 06:26:23.159964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:03.064 [2024-12-15 06:26:23.164217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8
00:36:03.064 [2024-12-15 06:26:23.164277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:03.064 [2024-12-15 06:26:23.164295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... many further identical "Data digest error" / "WRITE" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" record groups omitted; they differ only in timestamp, lba, and sqhd values ...]
00:36:03.588 [2024-12-15 06:26:23.475066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8
00:36:03.588 [2024-12-15 06:26:23.475120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:03.588 [2024-12-15 06:26:23.475138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:03.588 [2024-12-15 06:26:23.479641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8
00:36:03.588 [2024-12-15 06:26:23.479697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:03.588 [2024-12-15 06:26:23.479715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:03.588 [2024-12-15 06:26:23.484473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8
00:36:03.588 [2024-12-15 06:26:23.484561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:03.588 [2024-12-15 06:26:23.484580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:03.588 [2024-12-15 06:26:23.488836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8
00:36:03.588 [2024-12-15 06:26:23.488899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:03.588 [2024-12-15 06:26:23.488917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:03.588 6199.50 IOPS, 774.94 MiB/s [2024-12-15T05:26:23.728Z]
00:36:03.588 [2024-12-15 06:26:23.494220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f04100) with pdu=0x200016eff3c8
00:36:03.588 [2024-12-15 06:26:23.494274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:03.588 [2024-12-15 06:26:23.494292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:03.588
00:36:03.588 Latency(us)
00:36:03.588 [2024-12-15T05:26:23.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:03.588 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:36:03.588 nvme0n1 : 2.00 6196.88 774.61 0.00 0.00 2577.87 1934.87 8113.98
00:36:03.588 [2024-12-15T05:26:23.728Z] ===================================================================================================================
00:36:03.588 [2024-12-15T05:26:23.728Z] Total : 6196.88 774.61 0.00 0.00 2577.87 1934.87 8113.98
00:36:03.588 {
00:36:03.588   "results": [
00:36:03.588     {
00:36:03.588       "job": "nvme0n1",
00:36:03.588       "core_mask": "0x2",
00:36:03.588       "workload": "randwrite",
00:36:03.588       "status": "finished",
00:36:03.588       "queue_depth": 16,
00:36:03.588       "io_size": 131072,
00:36:03.588       "runtime": 2.003427,
00:36:03.588       "iops": 6196.881643304198,
00:36:03.588       "mibps": 774.6102054130248,
00:36:03.588       "io_failed": 0,
00:36:03.588       "io_timeout": 0,
00:36:03.588       "avg_latency_us": 2577.8728189785784,
00:36:03.588       "min_latency_us": 1934.872380952381,
00:36:03.588       "max_latency_us": 8113.980952380953
00:36:03.588     }
00:36:03.588   ],
00:36:03.588   "core_count": 1
00:36:03.588 }
00:36:03.588 06:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:03.588 06:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:03.588 06:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:03.588 | .driver_specific
00:36:03.588 | .nvme_error
00:36:03.588 | .status_code
00:36:03.588 | .command_transient_transport_error'
00:36:03.588 06:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:03.588 06:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 401 > 0 ))
00:36:03.588 06:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1197711
00:36:03.588 06:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1197711 ']'
00:36:03.588 06:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1197711
00:36:03.588 06:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:36:03.588 06:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:03.588 06:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1197711
00:36:03.848 06:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:36:03.848 06:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:36:03.848 06:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1197711'
killing process with pid 1197711
06:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1197711
Received shutdown signal, test time was about 2.000000 seconds
00:36:03.848
00:36:03.848 Latency(us)
00:36:03.848 [2024-12-15T05:26:23.988Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:03.848 [2024-12-15T05:26:23.988Z]
=================================================================================================================== 00:36:03.848 [2024-12-15T05:26:23.988Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:03.848 06:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1197711 00:36:03.848 06:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1196013 00:36:03.848 06:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1196013 ']' 00:36:03.848 06:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1196013 00:36:03.848 06:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:03.848 06:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:03.848 06:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1196013 00:36:03.848 06:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:03.848 06:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:03.848 06:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1196013' 00:36:03.848 killing process with pid 1196013 00:36:03.848 06:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1196013 00:36:03.848 06:26:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1196013 00:36:04.107 00:36:04.107 real 0m13.961s 00:36:04.107 user 0m26.752s 00:36:04.107 sys 0m4.550s 00:36:04.107 06:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:36:04.107 06:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:04.108 ************************************ 00:36:04.108 END TEST nvmf_digest_error 00:36:04.108 ************************************ 00:36:04.108 06:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:36:04.108 06:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:36:04.108 06:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:04.108 06:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:36:04.108 06:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:04.108 06:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:36:04.108 06:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:04.108 06:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:04.108 rmmod nvme_tcp 00:36:04.108 rmmod nvme_fabrics 00:36:04.108 rmmod nvme_keyring 00:36:04.108 06:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:04.108 06:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:36:04.108 06:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:36:04.108 06:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1196013 ']' 00:36:04.108 06:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1196013 00:36:04.108 06:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1196013 ']' 00:36:04.108 06:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1196013 00:36:04.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1196013) - No such process 00:36:04.108 06:26:24 
nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1196013 is not found' 00:36:04.108 Process with pid 1196013 is not found 00:36:04.108 06:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:04.108 06:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:04.108 06:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:04.108 06:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:36:04.108 06:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:36:04.108 06:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:04.108 06:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:36:04.108 06:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:04.108 06:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:04.108 06:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:04.108 06:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:04.108 06:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:06.662 00:36:06.662 real 0m36.263s 00:36:06.662 user 0m55.400s 00:36:06.662 sys 0m13.543s 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:06.662 ************************************ 00:36:06.662 END TEST nvmf_digest 00:36:06.662 ************************************ 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host -- 
nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.662 ************************************ 00:36:06.662 START TEST nvmf_bdevperf 00:36:06.662 ************************************ 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:36:06.662 * Looking for test storage... 
00:36:06.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:06.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.662 --rc genhtml_branch_coverage=1 00:36:06.662 --rc genhtml_function_coverage=1 00:36:06.662 --rc genhtml_legend=1 00:36:06.662 --rc geninfo_all_blocks=1 00:36:06.662 --rc geninfo_unexecuted_blocks=1 00:36:06.662 00:36:06.662 ' 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:36:06.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.662 --rc genhtml_branch_coverage=1 00:36:06.662 --rc genhtml_function_coverage=1 00:36:06.662 --rc genhtml_legend=1 00:36:06.662 --rc geninfo_all_blocks=1 00:36:06.662 --rc geninfo_unexecuted_blocks=1 00:36:06.662 00:36:06.662 ' 00:36:06.662 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:06.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.662 --rc genhtml_branch_coverage=1 00:36:06.662 --rc genhtml_function_coverage=1 00:36:06.662 --rc genhtml_legend=1 00:36:06.663 --rc geninfo_all_blocks=1 00:36:06.663 --rc geninfo_unexecuted_blocks=1 00:36:06.663 00:36:06.663 ' 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:06.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:06.663 --rc genhtml_branch_coverage=1 00:36:06.663 --rc genhtml_function_coverage=1 00:36:06.663 --rc genhtml_legend=1 00:36:06.663 --rc geninfo_all_blocks=1 00:36:06.663 --rc geninfo_unexecuted_blocks=1 00:36:06.663 00:36:06.663 ' 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:06.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:36:06.663 06:26:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:13.238 06:26:32 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:13.238 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:13.238 
06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:13.238 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:13.238 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:13.239 Found net devices under 0000:af:00.0: cvl_0_0 00:36:13.239 06:26:32 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:13.239 Found net devices under 0000:af:00.1: cvl_0_1 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:13.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:13.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:36:13.239 00:36:13.239 --- 10.0.0.2 ping statistics --- 00:36:13.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:13.239 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:13.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:13.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:36:13.239 00:36:13.239 --- 10.0.0.1 ping statistics --- 00:36:13.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:13.239 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1201652 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1201652 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1201652 ']' 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:13.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
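The `nvmf_tcp_init` steps traced above (flush addresses, create a namespace, move the target NIC into it, assign 10.0.0.1/10.0.0.2, bring links up, open TCP port 4420) can be sketched as a dry-run shell function. The interface names, IPs, and port come from the log; the `dry_run` wrapper and the `_sketch` function name are illustrative additions so the sequence can be shown without root privileges — this is a sketch of what the log executed, not the actual common.sh helper.

```shell
# Dry-run sketch of the nvmf_tcp_init sequence seen in the log above.
# dry_run only prints each command; the real helper runs them as root.
dry_run() { echo "+ $*"; }

nvmf_tcp_init_sketch() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    dry_run ip -4 addr flush "$target_if"
    dry_run ip -4 addr flush "$initiator_if"
    dry_run ip netns add "$ns"
    dry_run ip link set "$target_if" netns "$ns"         # target NIC moves into the namespace
    dry_run ip addr add 10.0.0.1/24 dev "$initiator_if"  # initiator stays in the root namespace
    dry_run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    dry_run ip link set "$initiator_if" up
    dry_run ip netns exec "$ns" ip link set "$target_if" up
    dry_run ip netns exec "$ns" ip link set lo up
    dry_run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
}

nvmf_tcp_init_sketch
```

Putting only the target side into a namespace is what lets a single host exercise a real TCP path: the ping checks that follow in the log cross the veth/NIC boundary rather than loopback.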
00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:13.239 [2024-12-15 06:26:32.572325] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:36:13.239 [2024-12-15 06:26:32.572376] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:13.239 [2024-12-15 06:26:32.652958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:13.239 [2024-12-15 06:26:32.676016] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:13.239 [2024-12-15 06:26:32.676053] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:13.239 [2024-12-15 06:26:32.676061] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:13.239 [2024-12-15 06:26:32.676067] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:13.239 [2024-12-15 06:26:32.676072] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:13.239 [2024-12-15 06:26:32.677414] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:36:13.239 [2024-12-15 06:26:32.677517] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:13.239 [2024-12-15 06:26:32.677518] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:13.239 [2024-12-15 06:26:32.809276] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:13.239 Malloc0 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:13.239 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.240 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:13.240 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.240 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:13.240 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.240 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:13.240 [2024-12-15 06:26:32.881553] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:13.240 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.240 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:36:13.240 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:36:13.240 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:36:13.240 
06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:36:13.240 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:13.240 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:13.240 { 00:36:13.240 "params": { 00:36:13.240 "name": "Nvme$subsystem", 00:36:13.240 "trtype": "$TEST_TRANSPORT", 00:36:13.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:13.240 "adrfam": "ipv4", 00:36:13.240 "trsvcid": "$NVMF_PORT", 00:36:13.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:13.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:13.240 "hdgst": ${hdgst:-false}, 00:36:13.240 "ddgst": ${ddgst:-false} 00:36:13.240 }, 00:36:13.240 "method": "bdev_nvme_attach_controller" 00:36:13.240 } 00:36:13.240 EOF 00:36:13.240 )") 00:36:13.240 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:36:13.240 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:36:13.240 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:36:13.240 06:26:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:13.240 "params": { 00:36:13.240 "name": "Nvme1", 00:36:13.240 "trtype": "tcp", 00:36:13.240 "traddr": "10.0.0.2", 00:36:13.240 "adrfam": "ipv4", 00:36:13.240 "trsvcid": "4420", 00:36:13.240 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:13.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:13.240 "hdgst": false, 00:36:13.240 "ddgst": false 00:36:13.240 }, 00:36:13.240 "method": "bdev_nvme_attach_controller" 00:36:13.240 }' 00:36:13.240 [2024-12-15 06:26:32.935194] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
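The `gen_nvmf_target_json` expansion above shows its mechanism: a heredoc template is filled in per subsystem (`Nvme$subsystem`, `cnode$subsystem`, `host$subsystem`) from the transport environment variables, and the fragments are comma-joined into the bdevperf `--json` config. A minimal sketch of that shape, assuming the values the log resolved to (tcp, 10.0.0.2, 4420); the `_sketch` function name is hypothetical and the real helper additionally pipes the result through `jq`:

```shell
# Sketch of gen_nvmf_target_json as traced above: one heredoc-expanded
# attach_controller block per subsystem argument, joined with commas.
gen_nvmf_target_json_sketch() {
    local TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
    local subsystem
    local -a config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"
}
```

With the default single subsystem this yields exactly the `Nvme1`/`cnode1` block the log prints before handing it to bdevperf on `/dev/fd/62`.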
00:36:13.240 [2024-12-15 06:26:32.935238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1201683 ] 00:36:13.240 [2024-12-15 06:26:33.010875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:13.240 [2024-12-15 06:26:33.033346] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:36:13.240 Running I/O for 1 seconds... 00:36:14.620 11369.00 IOPS, 44.41 MiB/s 00:36:14.620 Latency(us) 00:36:14.620 [2024-12-15T05:26:34.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:14.620 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:14.620 Verification LBA range: start 0x0 length 0x4000 00:36:14.620 Nvme1n1 : 1.01 11371.90 44.42 0.00 0.00 11212.83 2356.18 11858.90 00:36:14.620 [2024-12-15T05:26:34.760Z] =================================================================================================================== 00:36:14.620 [2024-12-15T05:26:34.760Z] Total : 11371.90 44.42 0.00 0.00 11212.83 2356.18 11858.90 00:36:14.620 06:26:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1202019 00:36:14.620 06:26:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:36:14.620 06:26:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:36:14.620 06:26:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:36:14.620 06:26:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:36:14.620 06:26:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:36:14.620 06:26:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:36:14.620 06:26:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:14.620 { 00:36:14.620 "params": { 00:36:14.620 "name": "Nvme$subsystem", 00:36:14.620 "trtype": "$TEST_TRANSPORT", 00:36:14.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:14.620 "adrfam": "ipv4", 00:36:14.620 "trsvcid": "$NVMF_PORT", 00:36:14.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:14.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:14.620 "hdgst": ${hdgst:-false}, 00:36:14.620 "ddgst": ${ddgst:-false} 00:36:14.620 }, 00:36:14.620 "method": "bdev_nvme_attach_controller" 00:36:14.620 } 00:36:14.620 EOF 00:36:14.620 )") 00:36:14.620 06:26:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:36:14.620 06:26:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:36:14.620 06:26:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:36:14.620 06:26:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:14.620 "params": { 00:36:14.620 "name": "Nvme1", 00:36:14.620 "trtype": "tcp", 00:36:14.620 "traddr": "10.0.0.2", 00:36:14.620 "adrfam": "ipv4", 00:36:14.620 "trsvcid": "4420", 00:36:14.620 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:14.620 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:14.620 "hdgst": false, 00:36:14.620 "ddgst": false 00:36:14.620 }, 00:36:14.620 "method": "bdev_nvme_attach_controller" 00:36:14.620 }' 00:36:14.620 [2024-12-15 06:26:34.568325] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
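The MiB/s column in the bdevperf tables above is just IOPS times the IO size (4096 bytes, the `-o` argument) divided by one MiB; checking the 1-second run's 11371.90 IOPS reproduces its 44.42 MiB/s. The `iops_to_mibs` helper below is an illustrative name, not part of the test scripts:

```shell
# MiB/s = IOPS * IO size / 2^20; values taken from the result table above.
iops_to_mibs() {
    awk -v iops="$1" -v iosize="$2" \
        'BEGIN { printf "%.2f\n", iops * iosize / (1024 * 1024) }'
}

iops_to_mibs 11371.90 4096   # prints 44.42, matching the table
```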
00:36:14.620 [2024-12-15 06:26:34.568377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1202019 ] 00:36:14.620 [2024-12-15 06:26:34.645759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:14.620 [2024-12-15 06:26:34.665738] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:36:14.879 Running I/O for 15 seconds... 00:36:16.764 11292.00 IOPS, 44.11 MiB/s [2024-12-15T05:26:37.846Z] 11403.00 IOPS, 44.54 MiB/s [2024-12-15T05:26:37.846Z] 06:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1201652 00:36:17.706 06:26:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:36:17.706 [2024-12-15 06:26:37.535029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:110664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.706 [2024-12-15 06:26:37.535066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.706 [2024-12-15 06:26:37.535082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:110672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.706 [2024-12-15 06:26:37.535092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.706 [2024-12-15 06:26:37.535101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.706 [2024-12-15 06:26:37.535109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.706 [2024-12-15 06:26:37.535118] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.706 [2024-12-15 06:26:37.535126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.706 [2024-12-15 06:26:37.535134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:110696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.706 [2024-12-15 06:26:37.535142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.706 [2024-12-15 06:26:37.535150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:110704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.706 [2024-12-15 06:26:37.535158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.706 [2024-12-15 06:26:37.535168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:110712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.706 [2024-12-15 06:26:37.535175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.706 [2024-12-15 06:26:37.535184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.706 [2024-12-15 06:26:37.535191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.706 [2024-12-15 06:26:37.535199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.706 [2024-12-15 06:26:37.535206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:36:17.706 [2024-12-15 06:26:37.535214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:110736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.706 [2024-12-15 06:26:37.535222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.706 [2024-12-15 06:26:37.535230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:110744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.706 [2024-12-15 06:26:37.535237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.706 [2024-12-15 06:26:37.535250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:110752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.706 [2024-12-15 06:26:37.535257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.706 [2024-12-15 06:26:37.535266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:110760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.706 [2024-12-15 06:26:37.535273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.706 [2024-12-15 06:26:37.535282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.706 [2024-12-15 06:26:37.535291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.706 [2024-12-15 06:26:37.535301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.706 [2024-12-15 
06:26:37.535309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.706 [2024-12-15 06:26:37.535318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:110784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.706 [2024-12-15 06:26:37.535327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.706 [2024-12-15 06:26:37.535336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:110792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.706 [2024-12-15 06:26:37.535345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.706 [2024-12-15 06:26:37.535356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:110800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.706 [2024-12-15 06:26:37.535365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.706 [2024-12-15 06:26:37.535376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:110808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.706 [2024-12-15 06:26:37.535385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.706 [2024-12-15 06:26:37.535395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:110816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535413] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:58 nsid:1 lba:110824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:110832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:110840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:110848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:110872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:110880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:110888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:110904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:110912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 
06:26:37.535584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:110920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:110928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:110936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:110944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:110952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535665] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:18 nsid:1 lba:110960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:110968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:110984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:110992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:111000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:111008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:111016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:111024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:111032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:17.707 [2024-12-15 06:26:37.535817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:111040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 
06:26:37.535833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:111048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:111056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:111064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:111072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:111080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535915] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:111088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:111096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:111104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:111112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:111120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.535980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.707 [2024-12-15 06:26:37.535988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:111128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.707 [2024-12-15 06:26:37.536112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:111136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:111144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:111152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:111160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:111168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:111176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 
06:26:37.536201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:111184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:111192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:111200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:111208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:111216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536284] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:118 nsid:1 lba:111224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:111232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:111240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:111248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:111256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:111264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:111272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:111280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:111288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:111296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:111304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 
06:26:37.536455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:111320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:111328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:111336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:111344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:111352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536536] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:111360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:111368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:111376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:111384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:111392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:111400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:111408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:111416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:111424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:111432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:111440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 06:26:37.536688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.708 [2024-12-15 
06:26:37.536704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.708 [2024-12-15 06:26:37.536712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:111456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.709 [2024-12-15 06:26:37.536718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.709 [2024-12-15 06:26:37.536727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:111464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.709 [2024-12-15 06:26:37.536733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.709 [2024-12-15 06:26:37.536741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:111472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.709 [2024-12-15 06:26:37.536747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.709 [2024-12-15 06:26:37.536754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:111480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.709 [2024-12-15 06:26:37.536761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.709 [2024-12-15 06:26:37.536769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:111488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.709 [2024-12-15 06:26:37.536775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.709 [2024-12-15 06:26:37.536783] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:111496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.709 [2024-12-15 06:26:37.536789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.709 [2024-12-15 06:26:37.536797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:111504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.709 [2024-12-15 06:26:37.536803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.709 [2024-12-15 06:26:37.536811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:111512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.709 [2024-12-15 06:26:37.536817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.709 [2024-12-15 06:26:37.536826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:111520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.709 [2024-12-15 06:26:37.536832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.709 [2024-12-15 06:26:37.536840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:111528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.709 [2024-12-15 06:26:37.536846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.709 [2024-12-15 06:26:37.536854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:111536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.709 [2024-12-15 06:26:37.536860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:36:17.709 [2024-12-15 06:26:37.536869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:111544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.709 [2024-12-15 06:26:37.536877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.709 [2024-12-15 06:26:37.536885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:111552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.709 [2024-12-15 06:26:37.536891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.709 [2024-12-15 06:26:37.536899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:111560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.709 [2024-12-15 06:26:37.536905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.709 [2024-12-15 06:26:37.536913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:111568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.709 [2024-12-15 06:26:37.536921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.709 [2024-12-15 06:26:37.536929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:111576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.709 [2024-12-15 06:26:37.536936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.709 [2024-12-15 06:26:37.536944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:111584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.709 [2024-12-15 
06:26:37.536951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.709 [2024-12-15 06:26:37.536958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:111592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.709 [2024-12-15 06:26:37.536964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.709 [2024-12-15 06:26:37.536972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:111600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.709 [2024-12-15 06:26:37.536979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.709 [2024-12-15 06:26:37.536987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:111608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.709 [2024-12-15 06:26:37.536999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.709 [2024-12-15 06:26:37.537007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:111616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.709 [2024-12-15 06:26:37.537013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.709 [2024-12-15 06:26:37.537021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.709 [2024-12-15 06:26:37.537027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.709 [2024-12-15 06:26:37.537035] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:20 nsid:1 lba:111632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.709 [2024-12-15 06:26:37.537041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.709 [2024-12-15 06:26:37.537049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:111640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.709 [2024-12-15 06:26:37.537056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.709 [2024-12-15 06:26:37.537065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:111648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.709 [2024-12-15 06:26:37.537072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.709 [2024-12-15 06:26:37.537080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:111656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.709 [2024-12-15 06:26:37.537086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.709 [2024-12-15 06:26:37.537093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:111664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.709 [2024-12-15 06:26:37.537100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.709 [2024-12-15 06:26:37.537109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3920 is same with the state(6) to be set 00:36:17.709 [2024-12-15 06:26:37.537118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:36:17.709 [2024-12-15 06:26:37.537124] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:36:17.709 [2024-12-15 06:26:37.537130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111672 len:8 PRP1 0x0 PRP2 0x0 00:36:17.709 [2024-12-15 06:26:37.537136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:17.709 [2024-12-15 06:26:37.539987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.709 [2024-12-15 06:26:37.540045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.709 [2024-12-15 06:26:37.540568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.709 [2024-12-15 06:26:37.540584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.709 [2024-12-15 06:26:37.540591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.709 [2024-12-15 06:26:37.540766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.709 [2024-12-15 06:26:37.540939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.709 [2024-12-15 06:26:37.540948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.709 [2024-12-15 06:26:37.540956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.709 [2024-12-15 06:26:37.540964] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.709 [2024-12-15 06:26:37.553196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.709 [2024-12-15 06:26:37.553665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.709 [2024-12-15 06:26:37.553683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.709 [2024-12-15 06:26:37.553692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.709 [2024-12-15 06:26:37.553866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.709 [2024-12-15 06:26:37.554044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.709 [2024-12-15 06:26:37.554055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.709 [2024-12-15 06:26:37.554065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.709 [2024-12-15 06:26:37.554072] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.709 [2024-12-15 06:26:37.566063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.709 [2024-12-15 06:26:37.566386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.709 [2024-12-15 06:26:37.566405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.709 [2024-12-15 06:26:37.566413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.709 [2024-12-15 06:26:37.566581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.709 [2024-12-15 06:26:37.566750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.709 [2024-12-15 06:26:37.566759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.709 [2024-12-15 06:26:37.566766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.710 [2024-12-15 06:26:37.566773] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.710 [2024-12-15 06:26:37.578880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.710 [2024-12-15 06:26:37.579263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.710 [2024-12-15 06:26:37.579281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.710 [2024-12-15 06:26:37.579290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.710 [2024-12-15 06:26:37.579464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.710 [2024-12-15 06:26:37.579647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.710 [2024-12-15 06:26:37.579655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.710 [2024-12-15 06:26:37.579662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.710 [2024-12-15 06:26:37.579669] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.710 [2024-12-15 06:26:37.591746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.710 [2024-12-15 06:26:37.592124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.710 [2024-12-15 06:26:37.592143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.710 [2024-12-15 06:26:37.592151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.710 [2024-12-15 06:26:37.592320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.710 [2024-12-15 06:26:37.592489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.710 [2024-12-15 06:26:37.592498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.710 [2024-12-15 06:26:37.592504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.710 [2024-12-15 06:26:37.592511] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.710 [2024-12-15 06:26:37.604737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.710 [2024-12-15 06:26:37.605185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.710 [2024-12-15 06:26:37.605203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.710 [2024-12-15 06:26:37.605210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.710 [2024-12-15 06:26:37.605378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.710 [2024-12-15 06:26:37.605547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.710 [2024-12-15 06:26:37.605555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.710 [2024-12-15 06:26:37.605561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.710 [2024-12-15 06:26:37.605567] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.710 [2024-12-15 06:26:37.617563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.710 [2024-12-15 06:26:37.618003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.710 [2024-12-15 06:26:37.618049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.710 [2024-12-15 06:26:37.618072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.710 [2024-12-15 06:26:37.618656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.710 [2024-12-15 06:26:37.619026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.710 [2024-12-15 06:26:37.619034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.710 [2024-12-15 06:26:37.619041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.710 [2024-12-15 06:26:37.619048] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.710 [2024-12-15 06:26:37.630310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.710 [2024-12-15 06:26:37.630693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.710 [2024-12-15 06:26:37.630709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.710 [2024-12-15 06:26:37.630716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.710 [2024-12-15 06:26:37.630874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.710 [2024-12-15 06:26:37.631039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.710 [2024-12-15 06:26:37.631063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.710 [2024-12-15 06:26:37.631070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.710 [2024-12-15 06:26:37.631077] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.710 [2024-12-15 06:26:37.643069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.710 [2024-12-15 06:26:37.643489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.710 [2024-12-15 06:26:37.643505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.710 [2024-12-15 06:26:37.643515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.710 [2024-12-15 06:26:37.643683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.710 [2024-12-15 06:26:37.643851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.710 [2024-12-15 06:26:37.643859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.710 [2024-12-15 06:26:37.643866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.710 [2024-12-15 06:26:37.643872] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.710 [2024-12-15 06:26:37.655946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.710 [2024-12-15 06:26:37.656301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.710 [2024-12-15 06:26:37.656317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.710 [2024-12-15 06:26:37.656325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.710 [2024-12-15 06:26:37.656492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.710 [2024-12-15 06:26:37.656660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.710 [2024-12-15 06:26:37.656668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.710 [2024-12-15 06:26:37.656675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.710 [2024-12-15 06:26:37.656681] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.710 [2024-12-15 06:26:37.668768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.710 [2024-12-15 06:26:37.669217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.710 [2024-12-15 06:26:37.669244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.710 [2024-12-15 06:26:37.669251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.710 [2024-12-15 06:26:37.669410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.710 [2024-12-15 06:26:37.669569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.710 [2024-12-15 06:26:37.669579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.710 [2024-12-15 06:26:37.669585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.710 [2024-12-15 06:26:37.669591] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.710 [2024-12-15 06:26:37.681574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.710 [2024-12-15 06:26:37.681986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.711 [2024-12-15 06:26:37.682009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.711 [2024-12-15 06:26:37.682018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.711 [2024-12-15 06:26:37.682177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.711 [2024-12-15 06:26:37.682340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.711 [2024-12-15 06:26:37.682350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.711 [2024-12-15 06:26:37.682357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.711 [2024-12-15 06:26:37.682363] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.711 [2024-12-15 06:26:37.694405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.711 [2024-12-15 06:26:37.694837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.711 [2024-12-15 06:26:37.694883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.711 [2024-12-15 06:26:37.694906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.711 [2024-12-15 06:26:37.695315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.711 [2024-12-15 06:26:37.695477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.711 [2024-12-15 06:26:37.695486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.711 [2024-12-15 06:26:37.695492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.711 [2024-12-15 06:26:37.695498] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.711 [2024-12-15 06:26:37.707365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.711 [2024-12-15 06:26:37.707793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.711 [2024-12-15 06:26:37.707839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.711 [2024-12-15 06:26:37.707863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.711 [2024-12-15 06:26:37.708462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.711 [2024-12-15 06:26:37.709059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.711 [2024-12-15 06:26:37.709086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.711 [2024-12-15 06:26:37.709107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.711 [2024-12-15 06:26:37.709128] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.711 [2024-12-15 06:26:37.720186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.711 [2024-12-15 06:26:37.720607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.711 [2024-12-15 06:26:37.720649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.711 [2024-12-15 06:26:37.720675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.711 [2024-12-15 06:26:37.721239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.711 [2024-12-15 06:26:37.721410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.711 [2024-12-15 06:26:37.721419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.711 [2024-12-15 06:26:37.721428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.711 [2024-12-15 06:26:37.721435] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.711 [2024-12-15 06:26:37.732925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.711 [2024-12-15 06:26:37.733327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.711 [2024-12-15 06:26:37.733374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.711 [2024-12-15 06:26:37.733398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.711 [2024-12-15 06:26:37.733954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.711 [2024-12-15 06:26:37.734143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.711 [2024-12-15 06:26:37.734153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.711 [2024-12-15 06:26:37.734159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.711 [2024-12-15 06:26:37.734167] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.711 [2024-12-15 06:26:37.745760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.711 [2024-12-15 06:26:37.746171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.711 [2024-12-15 06:26:37.746189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.711 [2024-12-15 06:26:37.746196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.711 [2024-12-15 06:26:37.746355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.711 [2024-12-15 06:26:37.746515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.711 [2024-12-15 06:26:37.746525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.711 [2024-12-15 06:26:37.746532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.711 [2024-12-15 06:26:37.746538] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.711 [2024-12-15 06:26:37.758612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.711 [2024-12-15 06:26:37.759003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.711 [2024-12-15 06:26:37.759020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.711 [2024-12-15 06:26:37.759029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.711 [2024-12-15 06:26:37.759189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.711 [2024-12-15 06:26:37.759349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.711 [2024-12-15 06:26:37.759358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.711 [2024-12-15 06:26:37.759364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.711 [2024-12-15 06:26:37.759371] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.711 [2024-12-15 06:26:37.771451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.711 [2024-12-15 06:26:37.771874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.711 [2024-12-15 06:26:37.771920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.711 [2024-12-15 06:26:37.771944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.711 [2024-12-15 06:26:37.772416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.711 [2024-12-15 06:26:37.772587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.711 [2024-12-15 06:26:37.772597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.711 [2024-12-15 06:26:37.772604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.711 [2024-12-15 06:26:37.772610] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.711 [2024-12-15 06:26:37.784200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.711 [2024-12-15 06:26:37.784609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.711 [2024-12-15 06:26:37.784626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.711 [2024-12-15 06:26:37.784635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.711 [2024-12-15 06:26:37.784794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.711 [2024-12-15 06:26:37.784954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.711 [2024-12-15 06:26:37.784963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.711 [2024-12-15 06:26:37.784969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.711 [2024-12-15 06:26:37.784975] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.711 [2024-12-15 06:26:37.797193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.711 [2024-12-15 06:26:37.797619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.711 [2024-12-15 06:26:37.797637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.711 [2024-12-15 06:26:37.797645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.711 [2024-12-15 06:26:37.797818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.711 [2024-12-15 06:26:37.797997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.711 [2024-12-15 06:26:37.798008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.711 [2024-12-15 06:26:37.798015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.711 [2024-12-15 06:26:37.798022] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.711 [2024-12-15 06:26:37.810215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.711 [2024-12-15 06:26:37.810545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.711 [2024-12-15 06:26:37.810562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.712 [2024-12-15 06:26:37.810573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.712 [2024-12-15 06:26:37.810742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.712 [2024-12-15 06:26:37.810911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.712 [2024-12-15 06:26:37.810921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.712 [2024-12-15 06:26:37.810928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.712 [2024-12-15 06:26:37.810934] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.712 [2024-12-15 06:26:37.823201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.712 [2024-12-15 06:26:37.823620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.712 [2024-12-15 06:26:37.823660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.712 [2024-12-15 06:26:37.823686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.712 [2024-12-15 06:26:37.824253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.712 [2024-12-15 06:26:37.824425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.712 [2024-12-15 06:26:37.824435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.712 [2024-12-15 06:26:37.824441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.712 [2024-12-15 06:26:37.824448] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.712 [2024-12-15 06:26:37.836152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.712 [2024-12-15 06:26:37.836581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.712 [2024-12-15 06:26:37.836599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.712 [2024-12-15 06:26:37.836607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.712 [2024-12-15 06:26:37.836781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.712 [2024-12-15 06:26:37.836955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.712 [2024-12-15 06:26:37.836965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.712 [2024-12-15 06:26:37.836972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.712 [2024-12-15 06:26:37.836979] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.971 [2024-12-15 06:26:37.849113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.971 [2024-12-15 06:26:37.849483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.971 [2024-12-15 06:26:37.849501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.971 [2024-12-15 06:26:37.849508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.971 [2024-12-15 06:26:37.849668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.971 [2024-12-15 06:26:37.849832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.971 [2024-12-15 06:26:37.849842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.971 [2024-12-15 06:26:37.849848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.971 [2024-12-15 06:26:37.849855] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.971 [2024-12-15 06:26:37.861965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.971 [2024-12-15 06:26:37.862400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.971 [2024-12-15 06:26:37.862419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.971 [2024-12-15 06:26:37.862427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.971 [2024-12-15 06:26:37.862601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.971 [2024-12-15 06:26:37.862771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.971 [2024-12-15 06:26:37.862781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.971 [2024-12-15 06:26:37.862788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.971 [2024-12-15 06:26:37.862794] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.971 10072.33 IOPS, 39.35 MiB/s [2024-12-15T05:26:38.111Z] [2024-12-15 06:26:37.876348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.971 [2024-12-15 06:26:37.876700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.971 [2024-12-15 06:26:37.876718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.971 [2024-12-15 06:26:37.876727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.971 [2024-12-15 06:26:37.876900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.971 [2024-12-15 06:26:37.877081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.971 [2024-12-15 06:26:37.877092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.971 [2024-12-15 06:26:37.877099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.971 [2024-12-15 06:26:37.877106] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.971 [2024-12-15 06:26:37.889330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.971 [2024-12-15 06:26:37.889801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.971 [2024-12-15 06:26:37.889847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.971 [2024-12-15 06:26:37.889871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.971 [2024-12-15 06:26:37.890346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.971 [2024-12-15 06:26:37.890523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.971 [2024-12-15 06:26:37.890533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.971 [2024-12-15 06:26:37.890547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.972 [2024-12-15 06:26:37.890554] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.972 [2024-12-15 06:26:37.902319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.972 [2024-12-15 06:26:37.902754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.972 [2024-12-15 06:26:37.902805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.972 [2024-12-15 06:26:37.902830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.972 [2024-12-15 06:26:37.903377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.972 [2024-12-15 06:26:37.903548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.972 [2024-12-15 06:26:37.903558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.972 [2024-12-15 06:26:37.903565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.972 [2024-12-15 06:26:37.903572] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.972 [2024-12-15 06:26:37.915163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.972 [2024-12-15 06:26:37.915516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.972 [2024-12-15 06:26:37.915535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.972 [2024-12-15 06:26:37.915543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.972 [2024-12-15 06:26:37.915711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.972 [2024-12-15 06:26:37.915880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.972 [2024-12-15 06:26:37.915890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.972 [2024-12-15 06:26:37.915896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.972 [2024-12-15 06:26:37.915903] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.972 [2024-12-15 06:26:37.928024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.972 [2024-12-15 06:26:37.928336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.972 [2024-12-15 06:26:37.928353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.972 [2024-12-15 06:26:37.928360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.972 [2024-12-15 06:26:37.928519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.972 [2024-12-15 06:26:37.928679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.972 [2024-12-15 06:26:37.928688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.972 [2024-12-15 06:26:37.928695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.972 [2024-12-15 06:26:37.928701] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.972 [2024-12-15 06:26:37.940838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.972 [2024-12-15 06:26:37.941255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.972 [2024-12-15 06:26:37.941273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.972 [2024-12-15 06:26:37.941281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.972 [2024-12-15 06:26:37.941439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.972 [2024-12-15 06:26:37.941599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.972 [2024-12-15 06:26:37.941608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.972 [2024-12-15 06:26:37.941614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.972 [2024-12-15 06:26:37.941621] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.972 [2024-12-15 06:26:37.953612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.972 [2024-12-15 06:26:37.954025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.972 [2024-12-15 06:26:37.954073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.972 [2024-12-15 06:26:37.954097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.972 [2024-12-15 06:26:37.954664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.972 [2024-12-15 06:26:37.954824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.972 [2024-12-15 06:26:37.954834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.972 [2024-12-15 06:26:37.954840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.972 [2024-12-15 06:26:37.954846] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.972 [2024-12-15 06:26:37.966397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.972 [2024-12-15 06:26:37.966813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.972 [2024-12-15 06:26:37.966859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.972 [2024-12-15 06:26:37.966884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.972 [2024-12-15 06:26:37.967485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.972 [2024-12-15 06:26:37.967672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.972 [2024-12-15 06:26:37.967682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.972 [2024-12-15 06:26:37.967688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.972 [2024-12-15 06:26:37.967696] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.972 [2024-12-15 06:26:37.981516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.972 [2024-12-15 06:26:37.982016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.972 [2024-12-15 06:26:37.982039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.972 [2024-12-15 06:26:37.982053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.972 [2024-12-15 06:26:37.982309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.972 [2024-12-15 06:26:37.982565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.972 [2024-12-15 06:26:37.982578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.972 [2024-12-15 06:26:37.982588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.972 [2024-12-15 06:26:37.982598] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.972 [2024-12-15 06:26:37.994402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.972 [2024-12-15 06:26:37.994828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.972 [2024-12-15 06:26:37.994877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.972 [2024-12-15 06:26:37.994901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.972 [2024-12-15 06:26:37.995499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.972 [2024-12-15 06:26:37.995982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.972 [2024-12-15 06:26:37.995995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.972 [2024-12-15 06:26:37.996003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.972 [2024-12-15 06:26:37.996010] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.972 [2024-12-15 06:26:38.007138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.972 [2024-12-15 06:26:38.007484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.972 [2024-12-15 06:26:38.007501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.972 [2024-12-15 06:26:38.007509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.972 [2024-12-15 06:26:38.007668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.972 [2024-12-15 06:26:38.007828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.972 [2024-12-15 06:26:38.007838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.972 [2024-12-15 06:26:38.007843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.972 [2024-12-15 06:26:38.007849] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.972 [2024-12-15 06:26:38.019985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.972 [2024-12-15 06:26:38.020313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.972 [2024-12-15 06:26:38.020330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.972 [2024-12-15 06:26:38.020338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.972 [2024-12-15 06:26:38.020497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.972 [2024-12-15 06:26:38.020660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.972 [2024-12-15 06:26:38.020670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.972 [2024-12-15 06:26:38.020676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.972 [2024-12-15 06:26:38.020683] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.972 [2024-12-15 06:26:38.032803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.972 [2024-12-15 06:26:38.033217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.972 [2024-12-15 06:26:38.033235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.972 [2024-12-15 06:26:38.033242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.972 [2024-12-15 06:26:38.033401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.972 [2024-12-15 06:26:38.033561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.972 [2024-12-15 06:26:38.033571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.972 [2024-12-15 06:26:38.033577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.972 [2024-12-15 06:26:38.033583] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.972 [2024-12-15 06:26:38.045656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.972 [2024-12-15 06:26:38.046068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.972 [2024-12-15 06:26:38.046086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.972 [2024-12-15 06:26:38.046094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.972 [2024-12-15 06:26:38.046266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.972 [2024-12-15 06:26:38.046440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.972 [2024-12-15 06:26:38.046450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.972 [2024-12-15 06:26:38.046456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.972 [2024-12-15 06:26:38.046463] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.972 [2024-12-15 06:26:38.058632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.972 [2024-12-15 06:26:38.059050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.972 [2024-12-15 06:26:38.059067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.972 [2024-12-15 06:26:38.059075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.972 [2024-12-15 06:26:38.059259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.972 [2024-12-15 06:26:38.059429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.972 [2024-12-15 06:26:38.059439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.972 [2024-12-15 06:26:38.059450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.972 [2024-12-15 06:26:38.059457] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.972 [2024-12-15 06:26:38.071491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.972 [2024-12-15 06:26:38.071820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.972 [2024-12-15 06:26:38.071866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.972 [2024-12-15 06:26:38.071891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.972 [2024-12-15 06:26:38.072491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.972 [2024-12-15 06:26:38.072987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.972 [2024-12-15 06:26:38.073001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.972 [2024-12-15 06:26:38.073008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.972 [2024-12-15 06:26:38.073014] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.972 [2024-12-15 06:26:38.084238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.972 [2024-12-15 06:26:38.084665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.972 [2024-12-15 06:26:38.084711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.972 [2024-12-15 06:26:38.084735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.972 [2024-12-15 06:26:38.085188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.972 [2024-12-15 06:26:38.085359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.972 [2024-12-15 06:26:38.085368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.972 [2024-12-15 06:26:38.085375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.972 [2024-12-15 06:26:38.085381] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.972 [2024-12-15 06:26:38.096976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.972 [2024-12-15 06:26:38.097329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.972 [2024-12-15 06:26:38.097376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:17.972 [2024-12-15 06:26:38.097399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:17.972 [2024-12-15 06:26:38.097890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:17.972 [2024-12-15 06:26:38.098072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.972 [2024-12-15 06:26:38.098082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.972 [2024-12-15 06:26:38.098089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.972 [2024-12-15 06:26:38.098095] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.232 [2024-12-15 06:26:38.110043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.232 [2024-12-15 06:26:38.110432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.232 [2024-12-15 06:26:38.110450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.232 [2024-12-15 06:26:38.110458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.232 [2024-12-15 06:26:38.110627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.232 [2024-12-15 06:26:38.110796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.232 [2024-12-15 06:26:38.110806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.232 [2024-12-15 06:26:38.110812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.232 [2024-12-15 06:26:38.110820] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.232 [2024-12-15 06:26:38.122813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.232 [2024-12-15 06:26:38.123227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.232 [2024-12-15 06:26:38.123245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.232 [2024-12-15 06:26:38.123252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.232 [2024-12-15 06:26:38.123411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.232 [2024-12-15 06:26:38.123571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.232 [2024-12-15 06:26:38.123580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.232 [2024-12-15 06:26:38.123586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.232 [2024-12-15 06:26:38.123592] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.232 [2024-12-15 06:26:38.135570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.232 [2024-12-15 06:26:38.135962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.232 [2024-12-15 06:26:38.135980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.232 [2024-12-15 06:26:38.135988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.232 [2024-12-15 06:26:38.136177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.232 [2024-12-15 06:26:38.136346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.232 [2024-12-15 06:26:38.136356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.232 [2024-12-15 06:26:38.136362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.232 [2024-12-15 06:26:38.136369] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.232 [2024-12-15 06:26:38.148308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.232 [2024-12-15 06:26:38.148626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.232 [2024-12-15 06:26:38.148643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.232 [2024-12-15 06:26:38.148654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.232 [2024-12-15 06:26:38.148813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.232 [2024-12-15 06:26:38.148974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.232 [2024-12-15 06:26:38.148983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.232 [2024-12-15 06:26:38.148989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.232 [2024-12-15 06:26:38.149001] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.232 [2024-12-15 06:26:38.161128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.232 [2024-12-15 06:26:38.161542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.232 [2024-12-15 06:26:38.161560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.232 [2024-12-15 06:26:38.161567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.232 [2024-12-15 06:26:38.161727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.232 [2024-12-15 06:26:38.161887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.232 [2024-12-15 06:26:38.161897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.232 [2024-12-15 06:26:38.161903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.232 [2024-12-15 06:26:38.161909] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.232 [2024-12-15 06:26:38.173903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.232 [2024-12-15 06:26:38.174328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.232 [2024-12-15 06:26:38.174345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.232 [2024-12-15 06:26:38.174353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.232 [2024-12-15 06:26:38.174513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.232 [2024-12-15 06:26:38.174673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.232 [2024-12-15 06:26:38.174682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.232 [2024-12-15 06:26:38.174689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.232 [2024-12-15 06:26:38.174695] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.232 [2024-12-15 06:26:38.186679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.232 [2024-12-15 06:26:38.187021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.232 [2024-12-15 06:26:38.187038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.232 [2024-12-15 06:26:38.187046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.232 [2024-12-15 06:26:38.187206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.233 [2024-12-15 06:26:38.187369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.233 [2024-12-15 06:26:38.187379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.233 [2024-12-15 06:26:38.187385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.233 [2024-12-15 06:26:38.187391] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.233 [2024-12-15 06:26:38.199429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.233 [2024-12-15 06:26:38.199760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.233 [2024-12-15 06:26:38.199805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.233 [2024-12-15 06:26:38.199829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.233 [2024-12-15 06:26:38.200364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.233 [2024-12-15 06:26:38.200535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.233 [2024-12-15 06:26:38.200545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.233 [2024-12-15 06:26:38.200552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.233 [2024-12-15 06:26:38.200559] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.233 [2024-12-15 06:26:38.214570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.233 [2024-12-15 06:26:38.215092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.233 [2024-12-15 06:26:38.215114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.233 [2024-12-15 06:26:38.215125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.233 [2024-12-15 06:26:38.215380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.233 [2024-12-15 06:26:38.215637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.233 [2024-12-15 06:26:38.215650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.233 [2024-12-15 06:26:38.215660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.233 [2024-12-15 06:26:38.215670] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.233 [2024-12-15 06:26:38.227597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.233 [2024-12-15 06:26:38.228027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.233 [2024-12-15 06:26:38.228075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.233 [2024-12-15 06:26:38.228098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.233 [2024-12-15 06:26:38.228578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.233 [2024-12-15 06:26:38.228748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.233 [2024-12-15 06:26:38.228758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.233 [2024-12-15 06:26:38.228767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.233 [2024-12-15 06:26:38.228775] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.233 [2024-12-15 06:26:38.240421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.233 [2024-12-15 06:26:38.240830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.233 [2024-12-15 06:26:38.240847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.233 [2024-12-15 06:26:38.240855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.233 [2024-12-15 06:26:38.241020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.233 [2024-12-15 06:26:38.241205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.233 [2024-12-15 06:26:38.241214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.233 [2024-12-15 06:26:38.241221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.233 [2024-12-15 06:26:38.241228] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.233 [2024-12-15 06:26:38.253300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.233 [2024-12-15 06:26:38.253717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.233 [2024-12-15 06:26:38.253737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.233 [2024-12-15 06:26:38.253744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.233 [2024-12-15 06:26:38.253905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.233 [2024-12-15 06:26:38.254089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.233 [2024-12-15 06:26:38.254100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.233 [2024-12-15 06:26:38.254106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.233 [2024-12-15 06:26:38.254113] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.233 [2024-12-15 06:26:38.266079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.233 [2024-12-15 06:26:38.266401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.233 [2024-12-15 06:26:38.266419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.233 [2024-12-15 06:26:38.266427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.233 [2024-12-15 06:26:38.266586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.233 [2024-12-15 06:26:38.266746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.233 [2024-12-15 06:26:38.266756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.233 [2024-12-15 06:26:38.266762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.233 [2024-12-15 06:26:38.266769] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.233 [2024-12-15 06:26:38.278919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.233 [2024-12-15 06:26:38.279346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.233 [2024-12-15 06:26:38.279392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.233 [2024-12-15 06:26:38.279416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.233 [2024-12-15 06:26:38.280016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.233 [2024-12-15 06:26:38.280542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.233 [2024-12-15 06:26:38.280552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.233 [2024-12-15 06:26:38.280558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.233 [2024-12-15 06:26:38.280565] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.233 [2024-12-15 06:26:38.291789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.233 [2024-12-15 06:26:38.292192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.233 [2024-12-15 06:26:38.292210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.233 [2024-12-15 06:26:38.292218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.233 [2024-12-15 06:26:38.292387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.233 [2024-12-15 06:26:38.292559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.233 [2024-12-15 06:26:38.292568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.233 [2024-12-15 06:26:38.292575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.233 [2024-12-15 06:26:38.292581] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.233 [2024-12-15 06:26:38.304914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.233 [2024-12-15 06:26:38.305326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.233 [2024-12-15 06:26:38.305344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.233 [2024-12-15 06:26:38.305352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.233 [2024-12-15 06:26:38.305525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.233 [2024-12-15 06:26:38.305698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.233 [2024-12-15 06:26:38.305708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.233 [2024-12-15 06:26:38.305715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.233 [2024-12-15 06:26:38.305722] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.233 [2024-12-15 06:26:38.317810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.233 [2024-12-15 06:26:38.318170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.233 [2024-12-15 06:26:38.318188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.233 [2024-12-15 06:26:38.318200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.233 [2024-12-15 06:26:38.318369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.233 [2024-12-15 06:26:38.318537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.234 [2024-12-15 06:26:38.318547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.234 [2024-12-15 06:26:38.318553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.234 [2024-12-15 06:26:38.318560] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.234 [2024-12-15 06:26:38.330542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.234 [2024-12-15 06:26:38.330949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.234 [2024-12-15 06:26:38.330987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.234 [2024-12-15 06:26:38.331028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.234 [2024-12-15 06:26:38.331554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.234 [2024-12-15 06:26:38.331724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.234 [2024-12-15 06:26:38.331734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.234 [2024-12-15 06:26:38.331740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.234 [2024-12-15 06:26:38.331747] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.234 [2024-12-15 06:26:38.343274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.234 [2024-12-15 06:26:38.343581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.234 [2024-12-15 06:26:38.343599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.234 [2024-12-15 06:26:38.343606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.234 [2024-12-15 06:26:38.343765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.234 [2024-12-15 06:26:38.343925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.234 [2024-12-15 06:26:38.343934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.234 [2024-12-15 06:26:38.343940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.234 [2024-12-15 06:26:38.343946] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.234 [2024-12-15 06:26:38.356072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.234 [2024-12-15 06:26:38.356487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.234 [2024-12-15 06:26:38.356504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.234 [2024-12-15 06:26:38.356511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.234 [2024-12-15 06:26:38.356671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.234 [2024-12-15 06:26:38.356833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.234 [2024-12-15 06:26:38.356843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.234 [2024-12-15 06:26:38.356849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.234 [2024-12-15 06:26:38.356856] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.234 [2024-12-15 06:26:38.368961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.234 [2024-12-15 06:26:38.369341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.234 [2024-12-15 06:26:38.369358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.234 [2024-12-15 06:26:38.369367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.495 [2024-12-15 06:26:38.369535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.495 [2024-12-15 06:26:38.369706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.495 [2024-12-15 06:26:38.369716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.495 [2024-12-15 06:26:38.369723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.495 [2024-12-15 06:26:38.369729] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.495 [2024-12-15 06:26:38.381732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.495 [2024-12-15 06:26:38.382144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-12-15 06:26:38.382162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.495 [2024-12-15 06:26:38.382169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.495 [2024-12-15 06:26:38.382328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.495 [2024-12-15 06:26:38.382488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.495 [2024-12-15 06:26:38.382497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.495 [2024-12-15 06:26:38.382503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.495 [2024-12-15 06:26:38.382510] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.495 [2024-12-15 06:26:38.394615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.495 [2024-12-15 06:26:38.395040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-12-15 06:26:38.395059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.495 [2024-12-15 06:26:38.395067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.495 [2024-12-15 06:26:38.395236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.495 [2024-12-15 06:26:38.395405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.495 [2024-12-15 06:26:38.395414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.495 [2024-12-15 06:26:38.395425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.495 [2024-12-15 06:26:38.395432] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.495 [2024-12-15 06:26:38.407501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.495 [2024-12-15 06:26:38.407908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-12-15 06:26:38.407953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.495 [2024-12-15 06:26:38.407977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.495 [2024-12-15 06:26:38.408548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.495 [2024-12-15 06:26:38.408719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.495 [2024-12-15 06:26:38.408728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.495 [2024-12-15 06:26:38.408735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.495 [2024-12-15 06:26:38.408742] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.495 [2024-12-15 06:26:38.420311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.495 [2024-12-15 06:26:38.420732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-12-15 06:26:38.420748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.495 [2024-12-15 06:26:38.420756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.495 [2024-12-15 06:26:38.420915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.495 [2024-12-15 06:26:38.421099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.495 [2024-12-15 06:26:38.421109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.495 [2024-12-15 06:26:38.421116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.495 [2024-12-15 06:26:38.421123] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.495 [2024-12-15 06:26:38.433164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.495 [2024-12-15 06:26:38.433556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-12-15 06:26:38.433573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.495 [2024-12-15 06:26:38.433581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.495 [2024-12-15 06:26:38.433740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.495 [2024-12-15 06:26:38.433899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.495 [2024-12-15 06:26:38.433908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.495 [2024-12-15 06:26:38.433915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.495 [2024-12-15 06:26:38.433921] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.495 [2024-12-15 06:26:38.445892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.495 [2024-12-15 06:26:38.446304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-12-15 06:26:38.446322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.495 [2024-12-15 06:26:38.446329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.495 [2024-12-15 06:26:38.446488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.495 [2024-12-15 06:26:38.446647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.495 [2024-12-15 06:26:38.446656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.495 [2024-12-15 06:26:38.446662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.495 [2024-12-15 06:26:38.446668] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.495 [2024-12-15 06:26:38.458735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.495 [2024-12-15 06:26:38.459146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-12-15 06:26:38.459163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.495 [2024-12-15 06:26:38.459171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.495 [2024-12-15 06:26:38.459330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.495 [2024-12-15 06:26:38.459490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.495 [2024-12-15 06:26:38.459499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.495 [2024-12-15 06:26:38.459505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.495 [2024-12-15 06:26:38.459512] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.495 [2024-12-15 06:26:38.471491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.495 [2024-12-15 06:26:38.471910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-12-15 06:26:38.471956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.495 [2024-12-15 06:26:38.471980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.495 [2024-12-15 06:26:38.472579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.495 [2024-12-15 06:26:38.473074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.495 [2024-12-15 06:26:38.473084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.495 [2024-12-15 06:26:38.473091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.495 [2024-12-15 06:26:38.473098] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.495 [2024-12-15 06:26:38.484384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.495 [2024-12-15 06:26:38.484779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.495 [2024-12-15 06:26:38.484797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.495 [2024-12-15 06:26:38.484808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.495 [2024-12-15 06:26:38.484977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.495 [2024-12-15 06:26:38.485196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.495 [2024-12-15 06:26:38.485207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.495 [2024-12-15 06:26:38.485214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.496 [2024-12-15 06:26:38.485221] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.496 [2024-12-15 06:26:38.497233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.496 [2024-12-15 06:26:38.497627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-12-15 06:26:38.497644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.496 [2024-12-15 06:26:38.497652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.496 [2024-12-15 06:26:38.497811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.496 [2024-12-15 06:26:38.497971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.496 [2024-12-15 06:26:38.497980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.496 [2024-12-15 06:26:38.497986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.496 [2024-12-15 06:26:38.497998] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.496 [2024-12-15 06:26:38.510029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.496 [2024-12-15 06:26:38.510442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-12-15 06:26:38.510459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.496 [2024-12-15 06:26:38.510467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.496 [2024-12-15 06:26:38.510626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.496 [2024-12-15 06:26:38.510786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.496 [2024-12-15 06:26:38.510796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.496 [2024-12-15 06:26:38.510802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.496 [2024-12-15 06:26:38.510808] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.496 [2024-12-15 06:26:38.522777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.496 [2024-12-15 06:26:38.523108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-12-15 06:26:38.523137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.496 [2024-12-15 06:26:38.523145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.496 [2024-12-15 06:26:38.523305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.496 [2024-12-15 06:26:38.523471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.496 [2024-12-15 06:26:38.523481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.496 [2024-12-15 06:26:38.523487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.496 [2024-12-15 06:26:38.523493] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.496 [2024-12-15 06:26:38.535617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.496 [2024-12-15 06:26:38.536039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-12-15 06:26:38.536071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.496 [2024-12-15 06:26:38.536095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.496 [2024-12-15 06:26:38.536679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.496 [2024-12-15 06:26:38.536999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.496 [2024-12-15 06:26:38.537010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.496 [2024-12-15 06:26:38.537016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.496 [2024-12-15 06:26:38.537023] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.496 [2024-12-15 06:26:38.548399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.496 [2024-12-15 06:26:38.548823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-12-15 06:26:38.548868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.496 [2024-12-15 06:26:38.548891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.496 [2024-12-15 06:26:38.549338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.496 [2024-12-15 06:26:38.549509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.496 [2024-12-15 06:26:38.549519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.496 [2024-12-15 06:26:38.549525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.496 [2024-12-15 06:26:38.549532] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.496 [2024-12-15 06:26:38.561221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.496 [2024-12-15 06:26:38.561567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-12-15 06:26:38.561586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.496 [2024-12-15 06:26:38.561594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.496 [2024-12-15 06:26:38.561762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.496 [2024-12-15 06:26:38.561931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.496 [2024-12-15 06:26:38.561940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.496 [2024-12-15 06:26:38.561950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.496 [2024-12-15 06:26:38.561958] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.496 [2024-12-15 06:26:38.574328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.496 [2024-12-15 06:26:38.574646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-12-15 06:26:38.574664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.496 [2024-12-15 06:26:38.574672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.496 [2024-12-15 06:26:38.574842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.496 [2024-12-15 06:26:38.575016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.496 [2024-12-15 06:26:38.575042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.496 [2024-12-15 06:26:38.575049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.496 [2024-12-15 06:26:38.575057] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.496 [2024-12-15 06:26:38.587267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.496 [2024-12-15 06:26:38.587591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-12-15 06:26:38.587609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.496 [2024-12-15 06:26:38.587617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.496 [2024-12-15 06:26:38.587777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.496 [2024-12-15 06:26:38.587936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.496 [2024-12-15 06:26:38.587946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.496 [2024-12-15 06:26:38.587952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.496 [2024-12-15 06:26:38.587959] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.496 [2024-12-15 06:26:38.600236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.496 [2024-12-15 06:26:38.600688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-12-15 06:26:38.600706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.496 [2024-12-15 06:26:38.600714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.496 [2024-12-15 06:26:38.600882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.496 [2024-12-15 06:26:38.601073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.496 [2024-12-15 06:26:38.601083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.496 [2024-12-15 06:26:38.601090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.496 [2024-12-15 06:26:38.601096] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.496 [2024-12-15 06:26:38.613472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.496 [2024-12-15 06:26:38.613874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.496 [2024-12-15 06:26:38.613892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.496 [2024-12-15 06:26:38.613900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.496 [2024-12-15 06:26:38.614080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.496 [2024-12-15 06:26:38.614255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.496 [2024-12-15 06:26:38.614265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.496 [2024-12-15 06:26:38.614272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.497 [2024-12-15 06:26:38.614279] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.497 [2024-12-15 06:26:38.626408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.497 [2024-12-15 06:26:38.626813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.497 [2024-12-15 06:26:38.626858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.497 [2024-12-15 06:26:38.626882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.497 [2024-12-15 06:26:38.627425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.497 [2024-12-15 06:26:38.627601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.497 [2024-12-15 06:26:38.627611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.497 [2024-12-15 06:26:38.627618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.497 [2024-12-15 06:26:38.627625] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.757 [2024-12-15 06:26:38.639501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.757 [2024-12-15 06:26:38.639905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.757 [2024-12-15 06:26:38.639924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.757 [2024-12-15 06:26:38.639932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.757 [2024-12-15 06:26:38.640112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.758 [2024-12-15 06:26:38.640286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.758 [2024-12-15 06:26:38.640297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.758 [2024-12-15 06:26:38.640304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.758 [2024-12-15 06:26:38.640311] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.758 [2024-12-15 06:26:38.652690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.758 [2024-12-15 06:26:38.653130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.758 [2024-12-15 06:26:38.653149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.758 [2024-12-15 06:26:38.653162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.758 [2024-12-15 06:26:38.653347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.758 [2024-12-15 06:26:38.653533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.758 [2024-12-15 06:26:38.653544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.758 [2024-12-15 06:26:38.653551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.758 [2024-12-15 06:26:38.653559] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.758 [2024-12-15 06:26:38.665783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.758 [2024-12-15 06:26:38.666131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.758 [2024-12-15 06:26:38.666151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.758 [2024-12-15 06:26:38.666159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.758 [2024-12-15 06:26:38.666345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.758 [2024-12-15 06:26:38.666531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.758 [2024-12-15 06:26:38.666542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.758 [2024-12-15 06:26:38.666549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.758 [2024-12-15 06:26:38.666557] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.758 [2024-12-15 06:26:38.679058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.758 [2024-12-15 06:26:38.679478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.758 [2024-12-15 06:26:38.679497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.758 [2024-12-15 06:26:38.679505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.758 [2024-12-15 06:26:38.679689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.758 [2024-12-15 06:26:38.679875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.758 [2024-12-15 06:26:38.679886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.758 [2024-12-15 06:26:38.679893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.758 [2024-12-15 06:26:38.679900] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.758 [2024-12-15 06:26:38.692088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.758 [2024-12-15 06:26:38.692518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.758 [2024-12-15 06:26:38.692537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.758 [2024-12-15 06:26:38.692545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.758 [2024-12-15 06:26:38.692718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.758 [2024-12-15 06:26:38.692896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.758 [2024-12-15 06:26:38.692906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.758 [2024-12-15 06:26:38.692912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.758 [2024-12-15 06:26:38.692919] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.758 [2024-12-15 06:26:38.705297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.758 [2024-12-15 06:26:38.705732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.758 [2024-12-15 06:26:38.705751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.758 [2024-12-15 06:26:38.705759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.758 [2024-12-15 06:26:38.705943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.758 [2024-12-15 06:26:38.706134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.758 [2024-12-15 06:26:38.706145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.758 [2024-12-15 06:26:38.706152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.758 [2024-12-15 06:26:38.706159] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.758 [2024-12-15 06:26:38.718312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.758 [2024-12-15 06:26:38.718677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.758 [2024-12-15 06:26:38.718695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.758 [2024-12-15 06:26:38.718703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.758 [2024-12-15 06:26:38.718876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.758 [2024-12-15 06:26:38.719057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.758 [2024-12-15 06:26:38.719068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.758 [2024-12-15 06:26:38.719074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.758 [2024-12-15 06:26:38.719081] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.758 [2024-12-15 06:26:38.731353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.758 [2024-12-15 06:26:38.731760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.758 [2024-12-15 06:26:38.731777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.758 [2024-12-15 06:26:38.731784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.758 [2024-12-15 06:26:38.731957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.758 [2024-12-15 06:26:38.732136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.758 [2024-12-15 06:26:38.732151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.758 [2024-12-15 06:26:38.732162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.758 [2024-12-15 06:26:38.732169] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.758 [2024-12-15 06:26:38.744422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.758 [2024-12-15 06:26:38.744847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.758 [2024-12-15 06:26:38.744866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.758 [2024-12-15 06:26:38.744875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.758 [2024-12-15 06:26:38.745064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.758 [2024-12-15 06:26:38.745250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.758 [2024-12-15 06:26:38.745260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.758 [2024-12-15 06:26:38.745267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.758 [2024-12-15 06:26:38.745275] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.758 [2024-12-15 06:26:38.757513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.758 [2024-12-15 06:26:38.757955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.758 [2024-12-15 06:26:38.757973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.758 [2024-12-15 06:26:38.757980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.758 [2024-12-15 06:26:38.758160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.758 [2024-12-15 06:26:38.758334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.758 [2024-12-15 06:26:38.758344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.758 [2024-12-15 06:26:38.758351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.758 [2024-12-15 06:26:38.758358] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.758 [2024-12-15 06:26:38.770517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.758 [2024-12-15 06:26:38.770948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.758 [2024-12-15 06:26:38.770966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.758 [2024-12-15 06:26:38.770974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.758 [2024-12-15 06:26:38.771155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.758 [2024-12-15 06:26:38.771330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.758 [2024-12-15 06:26:38.771340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.758 [2024-12-15 06:26:38.771347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.758 [2024-12-15 06:26:38.771354] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.758 [2024-12-15 06:26:38.783658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.758 [2024-12-15 06:26:38.784069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.758 [2024-12-15 06:26:38.784089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.758 [2024-12-15 06:26:38.784098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.758 [2024-12-15 06:26:38.784282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.758 [2024-12-15 06:26:38.784467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.758 [2024-12-15 06:26:38.784477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.758 [2024-12-15 06:26:38.784484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.759 [2024-12-15 06:26:38.784491] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.759 [2024-12-15 06:26:38.796737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.759 [2024-12-15 06:26:38.797167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.759 [2024-12-15 06:26:38.797186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:18.759 [2024-12-15 06:26:38.797194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:18.759 [2024-12-15 06:26:38.797367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:18.759 [2024-12-15 06:26:38.797541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.759 [2024-12-15 06:26:38.797551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.759 [2024-12-15 06:26:38.797557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.759 [2024-12-15 06:26:38.797564] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.759 [2024-12-15 06:26:38.809770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.759 [2024-12-15 06:26:38.810124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.759 [2024-12-15 06:26:38.810143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420
00:36:18.759 [2024-12-15 06:26:38.810151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set
00:36:18.759 [2024-12-15 06:26:38.810324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor
00:36:18.759 [2024-12-15 06:26:38.810498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.759 [2024-12-15 06:26:38.810509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.759 [2024-12-15 06:26:38.810515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.759 [2024-12-15 06:26:38.810522] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.759 [2024-12-15 06:26:38.822797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.759 [2024-12-15 06:26:38.823240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.759 [2024-12-15 06:26:38.823260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420
00:36:18.759 [2024-12-15 06:26:38.823272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set
00:36:18.759 [2024-12-15 06:26:38.823456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor
00:36:18.759 [2024-12-15 06:26:38.823641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.759 [2024-12-15 06:26:38.823651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.759 [2024-12-15 06:26:38.823658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.759 [2024-12-15 06:26:38.823665] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.759 [2024-12-15 06:26:38.835961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.759 [2024-12-15 06:26:38.836383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.759 [2024-12-15 06:26:38.836431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420
00:36:18.759 [2024-12-15 06:26:38.836455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set
00:36:18.759 [2024-12-15 06:26:38.836911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor
00:36:18.759 [2024-12-15 06:26:38.837092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.759 [2024-12-15 06:26:38.837102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.759 [2024-12-15 06:26:38.837109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.759 [2024-12-15 06:26:38.837116] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.759 [2024-12-15 06:26:38.848990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.759 [2024-12-15 06:26:38.849294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.759 [2024-12-15 06:26:38.849338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420
00:36:18.759 [2024-12-15 06:26:38.849361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set
00:36:18.759 [2024-12-15 06:26:38.849878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor
00:36:18.759 [2024-12-15 06:26:38.850054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.759 [2024-12-15 06:26:38.850064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.759 [2024-12-15 06:26:38.850071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.759 [2024-12-15 06:26:38.850078] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.759 [2024-12-15 06:26:38.861895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.759 [2024-12-15 06:26:38.862213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.759 [2024-12-15 06:26:38.862231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420
00:36:18.759 [2024-12-15 06:26:38.862238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set
00:36:18.759 [2024-12-15 06:26:38.862398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor
00:36:18.759 [2024-12-15 06:26:38.862562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.759 [2024-12-15 06:26:38.862571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.759 [2024-12-15 06:26:38.862578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.759 [2024-12-15 06:26:38.862584] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.759 [2024-12-15 06:26:38.875929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.759 7554.25 IOPS, 29.51 MiB/s [2024-12-15T05:26:38.899Z] [2024-12-15 06:26:38.876281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.759 [2024-12-15 06:26:38.876299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420
00:36:18.759 [2024-12-15 06:26:38.876307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set
00:36:18.759 [2024-12-15 06:26:38.876467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor
00:36:18.759 [2024-12-15 06:26:38.876627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.759 [2024-12-15 06:26:38.876636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.759 [2024-12-15 06:26:38.876642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.759 [2024-12-15 06:26:38.876649] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:18.759 [2024-12-15 06:26:38.888799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:18.759 [2024-12-15 06:26:38.889077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:18.759 [2024-12-15 06:26:38.889095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420
00:36:18.759 [2024-12-15 06:26:38.889104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set
00:36:18.759 [2024-12-15 06:26:38.889274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor
00:36:18.759 [2024-12-15 06:26:38.889442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:18.759 [2024-12-15 06:26:38.889451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:18.759 [2024-12-15 06:26:38.889458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:18.759 [2024-12-15 06:26:38.889464] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.020 [2024-12-15 06:26:38.901772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.020 [2024-12-15 06:26:38.902058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.020 [2024-12-15 06:26:38.902078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420
00:36:19.020 [2024-12-15 06:26:38.902087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set
00:36:19.020 [2024-12-15 06:26:38.902255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor
00:36:19.020 [2024-12-15 06:26:38.902423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.020 [2024-12-15 06:26:38.902433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.020 [2024-12-15 06:26:38.902443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.020 [2024-12-15 06:26:38.902451] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.020 [2024-12-15 06:26:38.914635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.020 [2024-12-15 06:26:38.914899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.020 [2024-12-15 06:26:38.914917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420
00:36:19.020 [2024-12-15 06:26:38.914924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set
00:36:19.020 [2024-12-15 06:26:38.915108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor
00:36:19.020 [2024-12-15 06:26:38.915278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.020 [2024-12-15 06:26:38.915288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.020 [2024-12-15 06:26:38.915294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.020 [2024-12-15 06:26:38.915300] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.020 [2024-12-15 06:26:38.927417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.020 [2024-12-15 06:26:38.927694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.020 [2024-12-15 06:26:38.927712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420
00:36:19.020 [2024-12-15 06:26:38.927721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set
00:36:19.020 [2024-12-15 06:26:38.927880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor
00:36:19.020 [2024-12-15 06:26:38.928062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.020 [2024-12-15 06:26:38.928073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.020 [2024-12-15 06:26:38.928080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.020 [2024-12-15 06:26:38.928088] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.020 [2024-12-15 06:26:38.940198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.020 [2024-12-15 06:26:38.940521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.020 [2024-12-15 06:26:38.940562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420
00:36:19.020 [2024-12-15 06:26:38.940589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set
00:36:19.020 [2024-12-15 06:26:38.941112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor
00:36:19.020 [2024-12-15 06:26:38.941283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.020 [2024-12-15 06:26:38.941292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.020 [2024-12-15 06:26:38.941299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.020 [2024-12-15 06:26:38.941305] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.020 [2024-12-15 06:26:38.953204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.020 [2024-12-15 06:26:38.953613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.020 [2024-12-15 06:26:38.953631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420
00:36:19.020 [2024-12-15 06:26:38.953639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set
00:36:19.020 [2024-12-15 06:26:38.953812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor
00:36:19.020 [2024-12-15 06:26:38.953986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.020 [2024-12-15 06:26:38.954004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.020 [2024-12-15 06:26:38.954011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.020 [2024-12-15 06:26:38.954018] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.020 [2024-12-15 06:26:38.966224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.020 [2024-12-15 06:26:38.966580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.020 [2024-12-15 06:26:38.966597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420
00:36:19.020 [2024-12-15 06:26:38.966605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set
00:36:19.020 [2024-12-15 06:26:38.966778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor
00:36:19.020 [2024-12-15 06:26:38.966952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.020 [2024-12-15 06:26:38.966963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.020 [2024-12-15 06:26:38.966969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.020 [2024-12-15 06:26:38.966976] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.020 [2024-12-15 06:26:38.979186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.020 [2024-12-15 06:26:38.979545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.021 [2024-12-15 06:26:38.979563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420
00:36:19.021 [2024-12-15 06:26:38.979572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set
00:36:19.021 [2024-12-15 06:26:38.979745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor
00:36:19.021 [2024-12-15 06:26:38.979921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.021 [2024-12-15 06:26:38.979932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.021 [2024-12-15 06:26:38.979938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.021 [2024-12-15 06:26:38.979945] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.021 [2024-12-15 06:26:38.992456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.021 [2024-12-15 06:26:38.992896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.021 [2024-12-15 06:26:38.992915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420
00:36:19.021 [2024-12-15 06:26:38.992927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set
00:36:19.021 [2024-12-15 06:26:38.993140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor
00:36:19.021 [2024-12-15 06:26:38.993339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.021 [2024-12-15 06:26:38.993349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.021 [2024-12-15 06:26:38.993358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.021 [2024-12-15 06:26:38.993366] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.021 [2024-12-15 06:26:39.005488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.021 [2024-12-15 06:26:39.005934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.021 [2024-12-15 06:26:39.005979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420
00:36:19.021 [2024-12-15 06:26:39.006018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set
00:36:19.021 [2024-12-15 06:26:39.006452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor
00:36:19.021 [2024-12-15 06:26:39.006627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.021 [2024-12-15 06:26:39.006637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.021 [2024-12-15 06:26:39.006644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.021 [2024-12-15 06:26:39.006651] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.021 [2024-12-15 06:26:39.018504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.021 [2024-12-15 06:26:39.018931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.021 [2024-12-15 06:26:39.018949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420
00:36:19.021 [2024-12-15 06:26:39.018957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set
00:36:19.021 [2024-12-15 06:26:39.019151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor
00:36:19.021 [2024-12-15 06:26:39.019326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.021 [2024-12-15 06:26:39.019335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.021 [2024-12-15 06:26:39.019342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.021 [2024-12-15 06:26:39.019349] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.021 [2024-12-15 06:26:39.031374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.021 [2024-12-15 06:26:39.031790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.021 [2024-12-15 06:26:39.031834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420
00:36:19.021 [2024-12-15 06:26:39.031858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set
00:36:19.021 [2024-12-15 06:26:39.032342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor
00:36:19.021 [2024-12-15 06:26:39.032516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.021 [2024-12-15 06:26:39.032527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.021 [2024-12-15 06:26:39.032533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.021 [2024-12-15 06:26:39.032540] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.021 [2024-12-15 06:26:39.044226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.021 [2024-12-15 06:26:39.044641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.021 [2024-12-15 06:26:39.044658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420
00:36:19.021 [2024-12-15 06:26:39.044666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set
00:36:19.021 [2024-12-15 06:26:39.044825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor
00:36:19.021 [2024-12-15 06:26:39.044984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.021 [2024-12-15 06:26:39.044998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.021 [2024-12-15 06:26:39.045006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.021 [2024-12-15 06:26:39.045013] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.021 [2024-12-15 06:26:39.056950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.021 [2024-12-15 06:26:39.057360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.021 [2024-12-15 06:26:39.057399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420
00:36:19.021 [2024-12-15 06:26:39.057425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set
00:36:19.021 [2024-12-15 06:26:39.058022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor
00:36:19.021 [2024-12-15 06:26:39.058303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.021 [2024-12-15 06:26:39.058313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.021 [2024-12-15 06:26:39.058318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.021 [2024-12-15 06:26:39.058325] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.021 [2024-12-15 06:26:39.069749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.021 [2024-12-15 06:26:39.070162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.021 [2024-12-15 06:26:39.070181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420
00:36:19.021 [2024-12-15 06:26:39.070189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set
00:36:19.021 [2024-12-15 06:26:39.070349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor
00:36:19.021 [2024-12-15 06:26:39.070509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.021 [2024-12-15 06:26:39.070518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.021 [2024-12-15 06:26:39.070529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.021 [2024-12-15 06:26:39.070535] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.021 [2024-12-15 06:26:39.082513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.021 [2024-12-15 06:26:39.082925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.021 [2024-12-15 06:26:39.082943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420
00:36:19.021 [2024-12-15 06:26:39.082950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set
00:36:19.021 [2024-12-15 06:26:39.083142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor
00:36:19.021 [2024-12-15 06:26:39.083316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.021 [2024-12-15 06:26:39.083326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.021 [2024-12-15 06:26:39.083333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.021 [2024-12-15 06:26:39.083340] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.021 [2024-12-15 06:26:39.095474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.021 [2024-12-15 06:26:39.095916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.021 [2024-12-15 06:26:39.095961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420
00:36:19.021 [2024-12-15 06:26:39.095984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set
00:36:19.021 [2024-12-15 06:26:39.096474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor
00:36:19.021 [2024-12-15 06:26:39.096644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.021 [2024-12-15 06:26:39.096654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.021 [2024-12-15 06:26:39.096661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.021 [2024-12-15 06:26:39.096667] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.021 [2024-12-15 06:26:39.108297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.021 [2024-12-15 06:26:39.108690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.022 [2024-12-15 06:26:39.108707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420
00:36:19.022 [2024-12-15 06:26:39.108715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set
00:36:19.022 [2024-12-15 06:26:39.108875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor
00:36:19.022 [2024-12-15 06:26:39.109056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.022 [2024-12-15 06:26:39.109066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.022 [2024-12-15 06:26:39.109073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.022 [2024-12-15 06:26:39.109081] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.022 [2024-12-15 06:26:39.121036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.022 [2024-12-15 06:26:39.121376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.022 [2024-12-15 06:26:39.121393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.022 [2024-12-15 06:26:39.121401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.022 [2024-12-15 06:26:39.121561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.022 [2024-12-15 06:26:39.121721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.022 [2024-12-15 06:26:39.121730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.022 [2024-12-15 06:26:39.121736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.022 [2024-12-15 06:26:39.121742] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.022 [2024-12-15 06:26:39.133847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.022 [2024-12-15 06:26:39.134265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.022 [2024-12-15 06:26:39.134282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.022 [2024-12-15 06:26:39.134290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.022 [2024-12-15 06:26:39.134450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.022 [2024-12-15 06:26:39.134611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.022 [2024-12-15 06:26:39.134620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.022 [2024-12-15 06:26:39.134626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.022 [2024-12-15 06:26:39.134633] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.022 [2024-12-15 06:26:39.146666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.022 [2024-12-15 06:26:39.147077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.022 [2024-12-15 06:26:39.147094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.022 [2024-12-15 06:26:39.147103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.022 [2024-12-15 06:26:39.147262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.022 [2024-12-15 06:26:39.147422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.022 [2024-12-15 06:26:39.147432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.022 [2024-12-15 06:26:39.147438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.022 [2024-12-15 06:26:39.147445] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.283 [2024-12-15 06:26:39.159614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.283 [2024-12-15 06:26:39.160024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.283 [2024-12-15 06:26:39.160041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.283 [2024-12-15 06:26:39.160055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.283 [2024-12-15 06:26:39.160215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.283 [2024-12-15 06:26:39.160375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.283 [2024-12-15 06:26:39.160386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.283 [2024-12-15 06:26:39.160392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.283 [2024-12-15 06:26:39.160399] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.283 [2024-12-15 06:26:39.172570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.283 [2024-12-15 06:26:39.172983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.283 [2024-12-15 06:26:39.173006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.283 [2024-12-15 06:26:39.173014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.283 [2024-12-15 06:26:39.173174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.283 [2024-12-15 06:26:39.173333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.283 [2024-12-15 06:26:39.173343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.283 [2024-12-15 06:26:39.173349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.283 [2024-12-15 06:26:39.173355] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.283 [2024-12-15 06:26:39.185405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.283 [2024-12-15 06:26:39.185796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.283 [2024-12-15 06:26:39.185813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.283 [2024-12-15 06:26:39.185820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.283 [2024-12-15 06:26:39.185980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.283 [2024-12-15 06:26:39.186171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.283 [2024-12-15 06:26:39.186182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.283 [2024-12-15 06:26:39.186188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.283 [2024-12-15 06:26:39.186195] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.283 [2024-12-15 06:26:39.198128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.283 [2024-12-15 06:26:39.198466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.283 [2024-12-15 06:26:39.198483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.283 [2024-12-15 06:26:39.198490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.283 [2024-12-15 06:26:39.198649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.283 [2024-12-15 06:26:39.198812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.283 [2024-12-15 06:26:39.198822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.283 [2024-12-15 06:26:39.198828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.283 [2024-12-15 06:26:39.198834] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.283 [2024-12-15 06:26:39.210981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.283 [2024-12-15 06:26:39.211384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.283 [2024-12-15 06:26:39.211430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.283 [2024-12-15 06:26:39.211453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.283 [2024-12-15 06:26:39.211908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.283 [2024-12-15 06:26:39.212250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.283 [2024-12-15 06:26:39.212270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.283 [2024-12-15 06:26:39.212285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.283 [2024-12-15 06:26:39.212299] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.283 [2024-12-15 06:26:39.225708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.283 [2024-12-15 06:26:39.226230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.283 [2024-12-15 06:26:39.226254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.283 [2024-12-15 06:26:39.226265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.283 [2024-12-15 06:26:39.226521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.283 [2024-12-15 06:26:39.226778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.283 [2024-12-15 06:26:39.226791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.283 [2024-12-15 06:26:39.226802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.283 [2024-12-15 06:26:39.226812] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.283 [2024-12-15 06:26:39.238831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.283 [2024-12-15 06:26:39.239240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.283 [2024-12-15 06:26:39.239258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.283 [2024-12-15 06:26:39.239266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.283 [2024-12-15 06:26:39.239439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.284 [2024-12-15 06:26:39.239613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.284 [2024-12-15 06:26:39.239624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.284 [2024-12-15 06:26:39.239634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.284 [2024-12-15 06:26:39.239641] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.284 [2024-12-15 06:26:39.251749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.284 [2024-12-15 06:26:39.252192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.284 [2024-12-15 06:26:39.252242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.284 [2024-12-15 06:26:39.252266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.284 [2024-12-15 06:26:39.252799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.284 [2024-12-15 06:26:39.252961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.284 [2024-12-15 06:26:39.252972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.284 [2024-12-15 06:26:39.252978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.284 [2024-12-15 06:26:39.252985] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.284 [2024-12-15 06:26:39.264650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.284 [2024-12-15 06:26:39.265063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.284 [2024-12-15 06:26:39.265081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.284 [2024-12-15 06:26:39.265088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.284 [2024-12-15 06:26:39.265248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.284 [2024-12-15 06:26:39.265408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.284 [2024-12-15 06:26:39.265418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.284 [2024-12-15 06:26:39.265424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.284 [2024-12-15 06:26:39.265430] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.284 [2024-12-15 06:26:39.277508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.284 [2024-12-15 06:26:39.277923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.284 [2024-12-15 06:26:39.277968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.284 [2024-12-15 06:26:39.278008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.284 [2024-12-15 06:26:39.278594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.284 [2024-12-15 06:26:39.278783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.284 [2024-12-15 06:26:39.278793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.284 [2024-12-15 06:26:39.278799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.284 [2024-12-15 06:26:39.278805] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.284 [2024-12-15 06:26:39.290366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.284 [2024-12-15 06:26:39.290714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.284 [2024-12-15 06:26:39.290731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.284 [2024-12-15 06:26:39.290738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.284 [2024-12-15 06:26:39.290897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.284 [2024-12-15 06:26:39.291065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.284 [2024-12-15 06:26:39.291075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.284 [2024-12-15 06:26:39.291081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.284 [2024-12-15 06:26:39.291088] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.284 [2024-12-15 06:26:39.303170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.284 [2024-12-15 06:26:39.303588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.284 [2024-12-15 06:26:39.303633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.284 [2024-12-15 06:26:39.303657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.284 [2024-12-15 06:26:39.304032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.284 [2024-12-15 06:26:39.304195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.284 [2024-12-15 06:26:39.304204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.284 [2024-12-15 06:26:39.304210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.284 [2024-12-15 06:26:39.304217] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.284 [2024-12-15 06:26:39.316031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.284 [2024-12-15 06:26:39.316462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.284 [2024-12-15 06:26:39.316509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.284 [2024-12-15 06:26:39.316532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.284 [2024-12-15 06:26:39.317006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.284 [2024-12-15 06:26:39.317191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.284 [2024-12-15 06:26:39.317202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.284 [2024-12-15 06:26:39.317208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.284 [2024-12-15 06:26:39.317215] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.284 [2024-12-15 06:26:39.328771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.284 [2024-12-15 06:26:39.329172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.284 [2024-12-15 06:26:39.329219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.284 [2024-12-15 06:26:39.329251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.284 [2024-12-15 06:26:39.329820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.284 [2024-12-15 06:26:39.329981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.284 [2024-12-15 06:26:39.329990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.284 [2024-12-15 06:26:39.330004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.284 [2024-12-15 06:26:39.330011] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.284 [2024-12-15 06:26:39.341536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.284 [2024-12-15 06:26:39.341950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.284 [2024-12-15 06:26:39.341968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.284 [2024-12-15 06:26:39.341976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.284 [2024-12-15 06:26:39.342169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.284 [2024-12-15 06:26:39.342343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.284 [2024-12-15 06:26:39.342353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.284 [2024-12-15 06:26:39.342360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.284 [2024-12-15 06:26:39.342366] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.284 [2024-12-15 06:26:39.354520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.284 [2024-12-15 06:26:39.354939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.284 [2024-12-15 06:26:39.354956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.284 [2024-12-15 06:26:39.354965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.284 [2024-12-15 06:26:39.355159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.284 [2024-12-15 06:26:39.355333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.284 [2024-12-15 06:26:39.355343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.284 [2024-12-15 06:26:39.355349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.284 [2024-12-15 06:26:39.355357] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.284 [2024-12-15 06:26:39.367306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.284 [2024-12-15 06:26:39.367733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.284 [2024-12-15 06:26:39.367779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.284 [2024-12-15 06:26:39.367803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.284 [2024-12-15 06:26:39.368247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.284 [2024-12-15 06:26:39.368421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.285 [2024-12-15 06:26:39.368431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.285 [2024-12-15 06:26:39.368437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.285 [2024-12-15 06:26:39.368444] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.285 [2024-12-15 06:26:39.380198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.285 [2024-12-15 06:26:39.380588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.285 [2024-12-15 06:26:39.380605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.285 [2024-12-15 06:26:39.380613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.285 [2024-12-15 06:26:39.380772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.285 [2024-12-15 06:26:39.380933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.285 [2024-12-15 06:26:39.380942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.285 [2024-12-15 06:26:39.380949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.285 [2024-12-15 06:26:39.380955] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.285 [2024-12-15 06:26:39.393065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.285 [2024-12-15 06:26:39.393462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.285 [2024-12-15 06:26:39.393480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.285 [2024-12-15 06:26:39.393487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.285 [2024-12-15 06:26:39.393655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.285 [2024-12-15 06:26:39.393824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.285 [2024-12-15 06:26:39.393834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.285 [2024-12-15 06:26:39.393840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.285 [2024-12-15 06:26:39.393848] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.285 [2024-12-15 06:26:39.405834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.285 [2024-12-15 06:26:39.406262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.285 [2024-12-15 06:26:39.406308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.285 [2024-12-15 06:26:39.406331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.285 [2024-12-15 06:26:39.406913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.285 [2024-12-15 06:26:39.407329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.285 [2024-12-15 06:26:39.407339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.285 [2024-12-15 06:26:39.407348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.285 [2024-12-15 06:26:39.407355] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.285 [2024-12-15 06:26:39.418781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.285 [2024-12-15 06:26:39.419203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.285 [2024-12-15 06:26:39.419220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.285 [2024-12-15 06:26:39.419228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.285 [2024-12-15 06:26:39.419387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.285 [2024-12-15 06:26:39.419548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.285 [2024-12-15 06:26:39.419557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.285 [2024-12-15 06:26:39.419563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.285 [2024-12-15 06:26:39.419571] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.546 [2024-12-15 06:26:39.431606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.546 [2024-12-15 06:26:39.432025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.546 [2024-12-15 06:26:39.432042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.546 [2024-12-15 06:26:39.432050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.546 [2024-12-15 06:26:39.432210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.546 [2024-12-15 06:26:39.432370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.546 [2024-12-15 06:26:39.432380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.546 [2024-12-15 06:26:39.432386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.546 [2024-12-15 06:26:39.432393] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.546 [2024-12-15 06:26:39.444367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.546 [2024-12-15 06:26:39.444696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.546 [2024-12-15 06:26:39.444714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.546 [2024-12-15 06:26:39.444721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.546 [2024-12-15 06:26:39.444880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.546 [2024-12-15 06:26:39.445047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.546 [2024-12-15 06:26:39.445057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.546 [2024-12-15 06:26:39.445064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.546 [2024-12-15 06:26:39.445070] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.546 [2024-12-15 06:26:39.457207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.546 [2024-12-15 06:26:39.457623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.546 [2024-12-15 06:26:39.457669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.546 [2024-12-15 06:26:39.457693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.546 [2024-12-15 06:26:39.458293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.546 [2024-12-15 06:26:39.458680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.546 [2024-12-15 06:26:39.458691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.546 [2024-12-15 06:26:39.458697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.546 [2024-12-15 06:26:39.458704] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.546 [2024-12-15 06:26:39.469955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.546 [2024-12-15 06:26:39.470374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.546 [2024-12-15 06:26:39.470414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.546 [2024-12-15 06:26:39.470439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.546 [2024-12-15 06:26:39.471039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.546 [2024-12-15 06:26:39.471590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.546 [2024-12-15 06:26:39.471599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.546 [2024-12-15 06:26:39.471606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.546 [2024-12-15 06:26:39.471613] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.546 [2024-12-15 06:26:39.482785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.546 [2024-12-15 06:26:39.483207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.546 [2024-12-15 06:26:39.483254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.546 [2024-12-15 06:26:39.483279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.546 [2024-12-15 06:26:39.483714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.546 [2024-12-15 06:26:39.483876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.546 [2024-12-15 06:26:39.483885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.546 [2024-12-15 06:26:39.483891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.546 [2024-12-15 06:26:39.483898] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.546 [2024-12-15 06:26:39.495743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.546 [2024-12-15 06:26:39.496185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.546 [2024-12-15 06:26:39.496231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.546 [2024-12-15 06:26:39.496263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.546 [2024-12-15 06:26:39.496688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.546 [2024-12-15 06:26:39.496849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.546 [2024-12-15 06:26:39.496858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.546 [2024-12-15 06:26:39.496866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.546 [2024-12-15 06:26:39.496873] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.546 [2024-12-15 06:26:39.508543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.546 [2024-12-15 06:26:39.508951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.546 [2024-12-15 06:26:39.508968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.547 [2024-12-15 06:26:39.508976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.547 [2024-12-15 06:26:39.509143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.547 [2024-12-15 06:26:39.509304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.547 [2024-12-15 06:26:39.509314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.547 [2024-12-15 06:26:39.509320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.547 [2024-12-15 06:26:39.509327] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.547 [2024-12-15 06:26:39.521394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.547 [2024-12-15 06:26:39.521807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.547 [2024-12-15 06:26:39.521852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.547 [2024-12-15 06:26:39.521875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.547 [2024-12-15 06:26:39.522257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.547 [2024-12-15 06:26:39.522419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.547 [2024-12-15 06:26:39.522429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.547 [2024-12-15 06:26:39.522435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.547 [2024-12-15 06:26:39.522441] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.547 [2024-12-15 06:26:39.534149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.547 [2024-12-15 06:26:39.534565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.547 [2024-12-15 06:26:39.534610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.547 [2024-12-15 06:26:39.534633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.547 [2024-12-15 06:26:39.535232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.547 [2024-12-15 06:26:39.535456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.547 [2024-12-15 06:26:39.535465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.547 [2024-12-15 06:26:39.535471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.547 [2024-12-15 06:26:39.535477] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.547 [2024-12-15 06:26:39.549328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.547 [2024-12-15 06:26:39.549821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.547 [2024-12-15 06:26:39.549843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.547 [2024-12-15 06:26:39.549853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.547 [2024-12-15 06:26:39.550117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.547 [2024-12-15 06:26:39.550375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.547 [2024-12-15 06:26:39.550387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.547 [2024-12-15 06:26:39.550397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.547 [2024-12-15 06:26:39.550407] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.547 [2024-12-15 06:26:39.562279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.547 [2024-12-15 06:26:39.562694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.547 [2024-12-15 06:26:39.562712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.547 [2024-12-15 06:26:39.562720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.547 [2024-12-15 06:26:39.562888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.547 [2024-12-15 06:26:39.563064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.547 [2024-12-15 06:26:39.563075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.547 [2024-12-15 06:26:39.563082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.547 [2024-12-15 06:26:39.563089] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.547 [2024-12-15 06:26:39.575098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.547 [2024-12-15 06:26:39.575529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.547 [2024-12-15 06:26:39.575575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.547 [2024-12-15 06:26:39.575599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.547 [2024-12-15 06:26:39.576198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.547 [2024-12-15 06:26:39.576726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.547 [2024-12-15 06:26:39.576745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.547 [2024-12-15 06:26:39.576766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.547 [2024-12-15 06:26:39.576780] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.547 [2024-12-15 06:26:39.590196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.547 [2024-12-15 06:26:39.590711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.547 [2024-12-15 06:26:39.590757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.547 [2024-12-15 06:26:39.590781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.547 [2024-12-15 06:26:39.591380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.547 [2024-12-15 06:26:39.591869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.547 [2024-12-15 06:26:39.591882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.547 [2024-12-15 06:26:39.591893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.547 [2024-12-15 06:26:39.591902] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.547 [2024-12-15 06:26:39.603234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.547 [2024-12-15 06:26:39.603654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.547 [2024-12-15 06:26:39.603672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.547 [2024-12-15 06:26:39.603680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.547 [2024-12-15 06:26:39.603847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.547 [2024-12-15 06:26:39.604022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.547 [2024-12-15 06:26:39.604033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.547 [2024-12-15 06:26:39.604039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.547 [2024-12-15 06:26:39.604047] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.547 [2024-12-15 06:26:39.616208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.547 [2024-12-15 06:26:39.616633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.547 [2024-12-15 06:26:39.616651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.547 [2024-12-15 06:26:39.616660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.547 [2024-12-15 06:26:39.616833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.547 [2024-12-15 06:26:39.617015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.547 [2024-12-15 06:26:39.617026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.547 [2024-12-15 06:26:39.617033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.547 [2024-12-15 06:26:39.617041] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.547 [2024-12-15 06:26:39.629249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.547 [2024-12-15 06:26:39.629669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.547 [2024-12-15 06:26:39.629715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.547 [2024-12-15 06:26:39.629739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.547 [2024-12-15 06:26:39.630344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.547 [2024-12-15 06:26:39.630878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.547 [2024-12-15 06:26:39.630889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.547 [2024-12-15 06:26:39.630896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.547 [2024-12-15 06:26:39.630903] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.547 [2024-12-15 06:26:39.642443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.547 [2024-12-15 06:26:39.642844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.547 [2024-12-15 06:26:39.642863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.547 [2024-12-15 06:26:39.642871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.547 [2024-12-15 06:26:39.643049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.548 [2024-12-15 06:26:39.643224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.548 [2024-12-15 06:26:39.643235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.548 [2024-12-15 06:26:39.643242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.548 [2024-12-15 06:26:39.643249] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.548 [2024-12-15 06:26:39.655503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.548 [2024-12-15 06:26:39.655918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.548 [2024-12-15 06:26:39.655956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.548 [2024-12-15 06:26:39.655982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.548 [2024-12-15 06:26:39.656580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.548 [2024-12-15 06:26:39.656876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.548 [2024-12-15 06:26:39.656886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.548 [2024-12-15 06:26:39.656893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.548 [2024-12-15 06:26:39.656899] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.548 [2024-12-15 06:26:39.670375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.548 [2024-12-15 06:26:39.670901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.548 [2024-12-15 06:26:39.670953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.548 [2024-12-15 06:26:39.670986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.548 [2024-12-15 06:26:39.671588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.548 [2024-12-15 06:26:39.671907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.548 [2024-12-15 06:26:39.671920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.548 [2024-12-15 06:26:39.671930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.548 [2024-12-15 06:26:39.671940] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.809 [2024-12-15 06:26:39.683315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.809 [2024-12-15 06:26:39.683741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.809 [2024-12-15 06:26:39.683759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.809 [2024-12-15 06:26:39.683768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.809 [2024-12-15 06:26:39.683936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.809 [2024-12-15 06:26:39.684113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.809 [2024-12-15 06:26:39.684123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.809 [2024-12-15 06:26:39.684129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.809 [2024-12-15 06:26:39.684136] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.809 [2024-12-15 06:26:39.696223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.809 [2024-12-15 06:26:39.696560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.809 [2024-12-15 06:26:39.696577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.809 [2024-12-15 06:26:39.696585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.809 [2024-12-15 06:26:39.696743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.809 [2024-12-15 06:26:39.696903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.809 [2024-12-15 06:26:39.696912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.809 [2024-12-15 06:26:39.696918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.809 [2024-12-15 06:26:39.696925] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.809 [2024-12-15 06:26:39.709078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.809 [2024-12-15 06:26:39.709484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.809 [2024-12-15 06:26:39.709501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.809 [2024-12-15 06:26:39.709509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.809 [2024-12-15 06:26:39.709668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.809 [2024-12-15 06:26:39.709831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.809 [2024-12-15 06:26:39.709840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.809 [2024-12-15 06:26:39.709846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.809 [2024-12-15 06:26:39.709853] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.809 [2024-12-15 06:26:39.721946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.810 [2024-12-15 06:26:39.722306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.810 [2024-12-15 06:26:39.722323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.810 [2024-12-15 06:26:39.722331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.810 [2024-12-15 06:26:39.722491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.810 [2024-12-15 06:26:39.722651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.810 [2024-12-15 06:26:39.722661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.810 [2024-12-15 06:26:39.722667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.810 [2024-12-15 06:26:39.722674] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.810 [2024-12-15 06:26:39.734816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.810 [2024-12-15 06:26:39.735230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.810 [2024-12-15 06:26:39.735274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.810 [2024-12-15 06:26:39.735299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.810 [2024-12-15 06:26:39.735822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.810 [2024-12-15 06:26:39.735983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.810 [2024-12-15 06:26:39.735999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.810 [2024-12-15 06:26:39.736005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.810 [2024-12-15 06:26:39.736012] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.810 [2024-12-15 06:26:39.747547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.810 [2024-12-15 06:26:39.747990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.810 [2024-12-15 06:26:39.748050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.810 [2024-12-15 06:26:39.748074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.810 [2024-12-15 06:26:39.748423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.810 [2024-12-15 06:26:39.748585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.810 [2024-12-15 06:26:39.748594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.810 [2024-12-15 06:26:39.748621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.810 [2024-12-15 06:26:39.748628] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.810 [2024-12-15 06:26:39.760451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.810 [2024-12-15 06:26:39.760862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.810 [2024-12-15 06:26:39.760907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.810 [2024-12-15 06:26:39.760931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.810 [2024-12-15 06:26:39.761532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.810 [2024-12-15 06:26:39.762133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.810 [2024-12-15 06:26:39.762160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.810 [2024-12-15 06:26:39.762192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.810 [2024-12-15 06:26:39.762199] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.810 [2024-12-15 06:26:39.773413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.810 [2024-12-15 06:26:39.773867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.810 [2024-12-15 06:26:39.773914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.810 [2024-12-15 06:26:39.773938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.810 [2024-12-15 06:26:39.774551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.810 [2024-12-15 06:26:39.775150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.810 [2024-12-15 06:26:39.775179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.810 [2024-12-15 06:26:39.775186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.810 [2024-12-15 06:26:39.775193] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.810 [2024-12-15 06:26:39.786293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.810 [2024-12-15 06:26:39.786651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.810 [2024-12-15 06:26:39.786669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.810 [2024-12-15 06:26:39.786676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.810 [2024-12-15 06:26:39.786845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.810 [2024-12-15 06:26:39.787019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.810 [2024-12-15 06:26:39.787030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.810 [2024-12-15 06:26:39.787036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.810 [2024-12-15 06:26:39.787043] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.810 [2024-12-15 06:26:39.799112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.810 [2024-12-15 06:26:39.799452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.810 [2024-12-15 06:26:39.799470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.810 [2024-12-15 06:26:39.799477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.810 [2024-12-15 06:26:39.799638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.810 [2024-12-15 06:26:39.799797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.810 [2024-12-15 06:26:39.799807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.810 [2024-12-15 06:26:39.799813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.810 [2024-12-15 06:26:39.799819] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.810 [2024-12-15 06:26:39.811895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.810 [2024-12-15 06:26:39.812303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.810 [2024-12-15 06:26:39.812322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.810 [2024-12-15 06:26:39.812330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.810 [2024-12-15 06:26:39.812499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.810 [2024-12-15 06:26:39.812667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.810 [2024-12-15 06:26:39.812677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.810 [2024-12-15 06:26:39.812683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.810 [2024-12-15 06:26:39.812690] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.810 [2024-12-15 06:26:39.824719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.810 [2024-12-15 06:26:39.825125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.810 [2024-12-15 06:26:39.825172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.810 [2024-12-15 06:26:39.825196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.810 [2024-12-15 06:26:39.825779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.810 [2024-12-15 06:26:39.826339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.810 [2024-12-15 06:26:39.826350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.810 [2024-12-15 06:26:39.826356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.810 [2024-12-15 06:26:39.826363] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.810 [2024-12-15 06:26:39.837457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.810 [2024-12-15 06:26:39.837882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.810 [2024-12-15 06:26:39.837928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.810 [2024-12-15 06:26:39.837960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.810 [2024-12-15 06:26:39.838404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.810 [2024-12-15 06:26:39.838567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.810 [2024-12-15 06:26:39.838576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.810 [2024-12-15 06:26:39.838585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.810 [2024-12-15 06:26:39.838592] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.811 [2024-12-15 06:26:39.850372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.811 [2024-12-15 06:26:39.850800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.811 [2024-12-15 06:26:39.850818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.811 [2024-12-15 06:26:39.850827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.811 [2024-12-15 06:26:39.851001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.811 [2024-12-15 06:26:39.851192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.811 [2024-12-15 06:26:39.851201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.811 [2024-12-15 06:26:39.851208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.811 [2024-12-15 06:26:39.851215] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.811 [2024-12-15 06:26:39.863359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.811 [2024-12-15 06:26:39.863777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.811 [2024-12-15 06:26:39.863794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.811 [2024-12-15 06:26:39.863802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.811 [2024-12-15 06:26:39.863971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.811 [2024-12-15 06:26:39.864165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.811 [2024-12-15 06:26:39.864176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.811 [2024-12-15 06:26:39.864182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.811 [2024-12-15 06:26:39.864189] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.811 [2024-12-15 06:26:39.876246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.811 [2024-12-15 06:26:39.876681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.811 [2024-12-15 06:26:39.876699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.811 [2024-12-15 06:26:39.876707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.811 [2024-12-15 06:26:39.876866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.811 [2024-12-15 06:26:39.877052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.811 [2024-12-15 06:26:39.877063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.811 [2024-12-15 06:26:39.877070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.811 [2024-12-15 06:26:39.877077] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.811 6043.40 IOPS, 23.61 MiB/s [2024-12-15T05:26:39.951Z] [2024-12-15 06:26:39.889026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.811 [2024-12-15 06:26:39.889419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.811 [2024-12-15 06:26:39.889437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.811 [2024-12-15 06:26:39.889445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.811 [2024-12-15 06:26:39.889605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.811 [2024-12-15 06:26:39.889765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.811 [2024-12-15 06:26:39.889775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.811 [2024-12-15 06:26:39.889781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.811 [2024-12-15 06:26:39.889788] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.811 [2024-12-15 06:26:39.901806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.811 [2024-12-15 06:26:39.902228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.811 [2024-12-15 06:26:39.902274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.811 [2024-12-15 06:26:39.902297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.811 [2024-12-15 06:26:39.902881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.811 [2024-12-15 06:26:39.903481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.811 [2024-12-15 06:26:39.903505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.811 [2024-12-15 06:26:39.903511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.811 [2024-12-15 06:26:39.903519] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.811 [2024-12-15 06:26:39.914549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.811 [2024-12-15 06:26:39.914867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.811 [2024-12-15 06:26:39.914884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.811 [2024-12-15 06:26:39.914892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.811 [2024-12-15 06:26:39.915076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.811 [2024-12-15 06:26:39.915246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.811 [2024-12-15 06:26:39.915256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.811 [2024-12-15 06:26:39.915266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.811 [2024-12-15 06:26:39.915273] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.811 [2024-12-15 06:26:39.927276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.811 [2024-12-15 06:26:39.927687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.811 [2024-12-15 06:26:39.927731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.811 [2024-12-15 06:26:39.927756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.811 [2024-12-15 06:26:39.928283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.811 [2024-12-15 06:26:39.928446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.811 [2024-12-15 06:26:39.928455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.811 [2024-12-15 06:26:39.928462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.811 [2024-12-15 06:26:39.928469] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.811 [2024-12-15 06:26:39.940109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.811 [2024-12-15 06:26:39.940524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.811 [2024-12-15 06:26:39.940541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:19.811 [2024-12-15 06:26:39.940548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:19.811 [2024-12-15 06:26:39.940707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:19.811 [2024-12-15 06:26:39.940867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.811 [2024-12-15 06:26:39.940876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.811 [2024-12-15 06:26:39.940882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.811 [2024-12-15 06:26:39.940889] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.073 [2024-12-15 06:26:39.952996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.073 [2024-12-15 06:26:39.953421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.073 [2024-12-15 06:26:39.953440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.073 [2024-12-15 06:26:39.953448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.073 [2024-12-15 06:26:39.953617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.073 [2024-12-15 06:26:39.953786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.073 [2024-12-15 06:26:39.953796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.073 [2024-12-15 06:26:39.953803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.073 [2024-12-15 06:26:39.953809] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.073 [2024-12-15 06:26:39.965738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.073 [2024-12-15 06:26:39.966159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.073 [2024-12-15 06:26:39.966206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.073 [2024-12-15 06:26:39.966230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.073 [2024-12-15 06:26:39.966662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.073 [2024-12-15 06:26:39.966822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.073 [2024-12-15 06:26:39.966831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.073 [2024-12-15 06:26:39.966837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.073 [2024-12-15 06:26:39.966843] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.073 [2024-12-15 06:26:39.978635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.073 [2024-12-15 06:26:39.979047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.074 [2024-12-15 06:26:39.979065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.074 [2024-12-15 06:26:39.979072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.074 [2024-12-15 06:26:39.979231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.074 [2024-12-15 06:26:39.979391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.074 [2024-12-15 06:26:39.979400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.074 [2024-12-15 06:26:39.979406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.074 [2024-12-15 06:26:39.979412] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.074 [2024-12-15 06:26:39.991423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.074 [2024-12-15 06:26:39.991761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.074 [2024-12-15 06:26:39.991778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.074 [2024-12-15 06:26:39.991786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.074 [2024-12-15 06:26:39.991946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.074 [2024-12-15 06:26:39.992134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.074 [2024-12-15 06:26:39.992144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.074 [2024-12-15 06:26:39.992151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.074 [2024-12-15 06:26:39.992157] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.074 [2024-12-15 06:26:40.004824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.074 [2024-12-15 06:26:40.005385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.074 [2024-12-15 06:26:40.005441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.074 [2024-12-15 06:26:40.005466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.074 [2024-12-15 06:26:40.005774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.074 [2024-12-15 06:26:40.006045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.074 [2024-12-15 06:26:40.006059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.074 [2024-12-15 06:26:40.006070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.074 [2024-12-15 06:26:40.006079] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.074 [2024-12-15 06:26:40.017929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.074 [2024-12-15 06:26:40.018228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.074 [2024-12-15 06:26:40.018248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.074 [2024-12-15 06:26:40.018256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.074 [2024-12-15 06:26:40.018429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.074 [2024-12-15 06:26:40.018603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.074 [2024-12-15 06:26:40.018614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.074 [2024-12-15 06:26:40.018621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.074 [2024-12-15 06:26:40.018628] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.074 [2024-12-15 06:26:40.031519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.074 [2024-12-15 06:26:40.031862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.074 [2024-12-15 06:26:40.031881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.074 [2024-12-15 06:26:40.031889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.074 [2024-12-15 06:26:40.032069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.074 [2024-12-15 06:26:40.032244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.074 [2024-12-15 06:26:40.032255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.074 [2024-12-15 06:26:40.032263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.074 [2024-12-15 06:26:40.032270] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.338 [2024-12-15 06:26:40.396339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.338 [2024-12-15 06:26:40.396782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.338 [2024-12-15 06:26:40.396800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.339 [2024-12-15 06:26:40.396808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.339 [2024-12-15 06:26:40.396982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.339 [2024-12-15 06:26:40.397161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.339 [2024-12-15 06:26:40.397177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.339 [2024-12-15 06:26:40.397184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.339 [2024-12-15 06:26:40.397191] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.339 [2024-12-15 06:26:40.409401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.339 [2024-12-15 06:26:40.409820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.339 [2024-12-15 06:26:40.409839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.339 [2024-12-15 06:26:40.409846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.339 [2024-12-15 06:26:40.410024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.339 [2024-12-15 06:26:40.410199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.339 [2024-12-15 06:26:40.410210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.339 [2024-12-15 06:26:40.410217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.339 [2024-12-15 06:26:40.410223] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.339 [2024-12-15 06:26:40.422426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.339 [2024-12-15 06:26:40.422875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.339 [2024-12-15 06:26:40.422893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.339 [2024-12-15 06:26:40.422902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.339 [2024-12-15 06:26:40.423081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.339 [2024-12-15 06:26:40.423256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.339 [2024-12-15 06:26:40.423266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.339 [2024-12-15 06:26:40.423280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.339 [2024-12-15 06:26:40.423287] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.339 [2024-12-15 06:26:40.435472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.339 [2024-12-15 06:26:40.435831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.339 [2024-12-15 06:26:40.435849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.339 [2024-12-15 06:26:40.435856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.339 [2024-12-15 06:26:40.436035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.339 [2024-12-15 06:26:40.436210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.339 [2024-12-15 06:26:40.436220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.339 [2024-12-15 06:26:40.436227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.339 [2024-12-15 06:26:40.436233] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.339 [2024-12-15 06:26:40.448584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.339 [2024-12-15 06:26:40.448984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.339 [2024-12-15 06:26:40.449009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.339 [2024-12-15 06:26:40.449018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.339 [2024-12-15 06:26:40.449191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.339 [2024-12-15 06:26:40.449365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.339 [2024-12-15 06:26:40.449375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.339 [2024-12-15 06:26:40.449381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.339 [2024-12-15 06:26:40.449388] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.339 [2024-12-15 06:26:40.461584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.339 [2024-12-15 06:26:40.462006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.339 [2024-12-15 06:26:40.462023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.339 [2024-12-15 06:26:40.462031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.339 [2024-12-15 06:26:40.462205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.339 [2024-12-15 06:26:40.462379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.339 [2024-12-15 06:26:40.462389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.339 [2024-12-15 06:26:40.462395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.339 [2024-12-15 06:26:40.462402] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.601 [2024-12-15 06:26:40.474603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.601 [2024-12-15 06:26:40.475033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.601 [2024-12-15 06:26:40.475052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.601 [2024-12-15 06:26:40.475060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.601 [2024-12-15 06:26:40.475234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.601 [2024-12-15 06:26:40.475408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.601 [2024-12-15 06:26:40.475417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.601 [2024-12-15 06:26:40.475425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.601 [2024-12-15 06:26:40.475431] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.601 [2024-12-15 06:26:40.487629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.601 [2024-12-15 06:26:40.488052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.601 [2024-12-15 06:26:40.488070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.601 [2024-12-15 06:26:40.488078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.601 [2024-12-15 06:26:40.488252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.601 [2024-12-15 06:26:40.488426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.601 [2024-12-15 06:26:40.488436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.601 [2024-12-15 06:26:40.488442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.601 [2024-12-15 06:26:40.488449] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.601 [2024-12-15 06:26:40.500642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.601 [2024-12-15 06:26:40.501076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.601 [2024-12-15 06:26:40.501094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.601 [2024-12-15 06:26:40.501102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.601 [2024-12-15 06:26:40.501277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.601 [2024-12-15 06:26:40.501451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.601 [2024-12-15 06:26:40.501461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.601 [2024-12-15 06:26:40.501468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.601 [2024-12-15 06:26:40.501475] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.601 [2024-12-15 06:26:40.513628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.601 [2024-12-15 06:26:40.514050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.601 [2024-12-15 06:26:40.514069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.601 [2024-12-15 06:26:40.514081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.601 [2024-12-15 06:26:40.514255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.601 [2024-12-15 06:26:40.514428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.601 [2024-12-15 06:26:40.514438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.601 [2024-12-15 06:26:40.514444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.601 [2024-12-15 06:26:40.514451] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.601 [2024-12-15 06:26:40.526644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.601 [2024-12-15 06:26:40.527069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.602 [2024-12-15 06:26:40.527087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.602 [2024-12-15 06:26:40.527096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.602 [2024-12-15 06:26:40.527270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.602 [2024-12-15 06:26:40.527444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.602 [2024-12-15 06:26:40.527454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.602 [2024-12-15 06:26:40.527460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.602 [2024-12-15 06:26:40.527467] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1201652 Killed "${NVMF_APP[@]}" "$@" 00:36:20.602 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:36:20.602 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:20.602 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:20.602 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:20.602 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:20.602 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1203008 00:36:20.602 [2024-12-15 06:26:40.539662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.602 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1203008 00:36:20.602 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:20.602 [2024-12-15 06:26:40.540002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.602 [2024-12-15 06:26:40.540022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.602 [2024-12-15 06:26:40.540030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.602 [2024-12-15 06:26:40.540204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.602 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1203008 ']' 00:36:20.602 [2024-12-15 06:26:40.540377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.602 [2024-12-15 06:26:40.540394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.602 [2024-12-15 06:26:40.540403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.602 [2024-12-15 06:26:40.540411] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:20.602 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:20.602 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:20.602 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:20.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:20.602 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:20.602 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:20.602 [2024-12-15 06:26:40.552776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.602 [2024-12-15 06:26:40.553112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.602 [2024-12-15 06:26:40.553131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.602 [2024-12-15 06:26:40.553139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.602 [2024-12-15 06:26:40.553311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.602 [2024-12-15 06:26:40.553485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.602 [2024-12-15 06:26:40.553494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.602 [2024-12-15 06:26:40.553500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.602 [2024-12-15 06:26:40.553506] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.602 [2024-12-15 06:26:40.565872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.602 [2024-12-15 06:26:40.566200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.602 [2024-12-15 06:26:40.566219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.602 [2024-12-15 06:26:40.566227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.602 [2024-12-15 06:26:40.566400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.602 [2024-12-15 06:26:40.566574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.602 [2024-12-15 06:26:40.566584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.602 [2024-12-15 06:26:40.566591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.602 [2024-12-15 06:26:40.566599] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.602 [2024-12-15 06:26:40.578980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.602 [2024-12-15 06:26:40.579333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.602 [2024-12-15 06:26:40.579351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.602 [2024-12-15 06:26:40.579359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.602 [2024-12-15 06:26:40.579536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.602 [2024-12-15 06:26:40.579710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.602 [2024-12-15 06:26:40.579721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.602 [2024-12-15 06:26:40.579727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.602 [2024-12-15 06:26:40.579734] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:20.602 [2024-12-15 06:26:40.585072] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:36:20.602 [2024-12-15 06:26:40.585115] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:20.602 [2024-12-15 06:26:40.592097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.602 [2024-12-15 06:26:40.592485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.602 [2024-12-15 06:26:40.592504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.602 [2024-12-15 06:26:40.592512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.602 [2024-12-15 06:26:40.592687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.602 [2024-12-15 06:26:40.592863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.602 [2024-12-15 06:26:40.592873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.602 [2024-12-15 06:26:40.592880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.602 [2024-12-15 06:26:40.592888] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.602 [2024-12-15 06:26:40.605105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.602 [2024-12-15 06:26:40.605514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.602 [2024-12-15 06:26:40.605534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.602 [2024-12-15 06:26:40.605543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.602 [2024-12-15 06:26:40.605717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.602 [2024-12-15 06:26:40.605891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.602 [2024-12-15 06:26:40.605902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.602 [2024-12-15 06:26:40.605909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.602 [2024-12-15 06:26:40.605917] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.602 [2024-12-15 06:26:40.618123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.602 [2024-12-15 06:26:40.618526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.602 [2024-12-15 06:26:40.618545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.602 [2024-12-15 06:26:40.618557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.602 [2024-12-15 06:26:40.618733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.602 [2024-12-15 06:26:40.618909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.602 [2024-12-15 06:26:40.618918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.602 [2024-12-15 06:26:40.618925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.602 [2024-12-15 06:26:40.618932] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.602 [2024-12-15 06:26:40.631126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.602 [2024-12-15 06:26:40.631509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.602 [2024-12-15 06:26:40.631526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.602 [2024-12-15 06:26:40.631534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.602 [2024-12-15 06:26:40.631708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.603 [2024-12-15 06:26:40.631881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.603 [2024-12-15 06:26:40.631891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.603 [2024-12-15 06:26:40.631898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.603 [2024-12-15 06:26:40.631908] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.603 [2024-12-15 06:26:40.644114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.603 [2024-12-15 06:26:40.644476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.603 [2024-12-15 06:26:40.644494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.603 [2024-12-15 06:26:40.644502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.603 [2024-12-15 06:26:40.644676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.603 [2024-12-15 06:26:40.644851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.603 [2024-12-15 06:26:40.644861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.603 [2024-12-15 06:26:40.644868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.603 [2024-12-15 06:26:40.644875] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.603 [2024-12-15 06:26:40.657082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.603 [2024-12-15 06:26:40.657494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.603 [2024-12-15 06:26:40.657513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.603 [2024-12-15 06:26:40.657522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.603 [2024-12-15 06:26:40.657696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.603 [2024-12-15 06:26:40.657876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.603 [2024-12-15 06:26:40.657887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.603 [2024-12-15 06:26:40.657896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.603 [2024-12-15 06:26:40.657903] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.603 [2024-12-15 06:26:40.664958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:20.603 [2024-12-15 06:26:40.670110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.603 [2024-12-15 06:26:40.670513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.603 [2024-12-15 06:26:40.670531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.603 [2024-12-15 06:26:40.670539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.603 [2024-12-15 06:26:40.670714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.603 [2024-12-15 06:26:40.670889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.603 [2024-12-15 06:26:40.670900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.603 [2024-12-15 06:26:40.670908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.603 [2024-12-15 06:26:40.670915] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.603 [2024-12-15 06:26:40.683099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.603 [2024-12-15 06:26:40.683438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.603 [2024-12-15 06:26:40.683457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.603 [2024-12-15 06:26:40.683465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.603 [2024-12-15 06:26:40.683638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.603 [2024-12-15 06:26:40.683813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.603 [2024-12-15 06:26:40.683824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.603 [2024-12-15 06:26:40.683830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.603 [2024-12-15 06:26:40.683837] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:20.603 [2024-12-15 06:26:40.687486] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:20.603 [2024-12-15 06:26:40.687516] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:20.603 [2024-12-15 06:26:40.687522] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:20.603 [2024-12-15 06:26:40.687528] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:36:20.603 [2024-12-15 06:26:40.687534] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:20.603 [2024-12-15 06:26:40.688696] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:36:20.603 [2024-12-15 06:26:40.688804] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:20.603 [2024-12-15 06:26:40.688806] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:36:20.603 [2024-12-15 06:26:40.696162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.603 [2024-12-15 06:26:40.696558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.603 [2024-12-15 06:26:40.696580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.603 [2024-12-15 06:26:40.696590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.603 [2024-12-15 06:26:40.696767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.603 [2024-12-15 06:26:40.696945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.603 [2024-12-15 06:26:40.696957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.603 [2024-12-15 06:26:40.696966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.603 [2024-12-15 06:26:40.696974] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.603 [2024-12-15 06:26:40.709182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.603 [2024-12-15 06:26:40.709646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.603 [2024-12-15 06:26:40.709667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.603 [2024-12-15 06:26:40.709679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.603 [2024-12-15 06:26:40.709856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.603 [2024-12-15 06:26:40.710038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.603 [2024-12-15 06:26:40.710049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.603 [2024-12-15 06:26:40.710057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.603 [2024-12-15 06:26:40.710066] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.603 [2024-12-15 06:26:40.722253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.603 [2024-12-15 06:26:40.722694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.603 [2024-12-15 06:26:40.722717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.603 [2024-12-15 06:26:40.722727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.603 [2024-12-15 06:26:40.722904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.603 [2024-12-15 06:26:40.723085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.603 [2024-12-15 06:26:40.723096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.603 [2024-12-15 06:26:40.723105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.603 [2024-12-15 06:26:40.723112] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.603 [2024-12-15 06:26:40.735312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.603 [2024-12-15 06:26:40.735742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.603 [2024-12-15 06:26:40.735764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.603 [2024-12-15 06:26:40.735781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.603 [2024-12-15 06:26:40.735957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.603 [2024-12-15 06:26:40.736142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.603 [2024-12-15 06:26:40.736154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.603 [2024-12-15 06:26:40.736161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.603 [2024-12-15 06:26:40.736168] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.864 [2024-12-15 06:26:40.748370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.864 [2024-12-15 06:26:40.748820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.864 [2024-12-15 06:26:40.748842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.864 [2024-12-15 06:26:40.748851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.864 [2024-12-15 06:26:40.749034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.864 [2024-12-15 06:26:40.749212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.864 [2024-12-15 06:26:40.749223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.864 [2024-12-15 06:26:40.749231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.864 [2024-12-15 06:26:40.749238] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.864 [2024-12-15 06:26:40.761418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.864 [2024-12-15 06:26:40.761825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.864 [2024-12-15 06:26:40.761844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.864 [2024-12-15 06:26:40.761853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.864 [2024-12-15 06:26:40.762033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.864 [2024-12-15 06:26:40.762209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.865 [2024-12-15 06:26:40.762220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.865 [2024-12-15 06:26:40.762227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.865 [2024-12-15 06:26:40.762234] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.865 [2024-12-15 06:26:40.774433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.865 [2024-12-15 06:26:40.774862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.865 [2024-12-15 06:26:40.774881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.865 [2024-12-15 06:26:40.774889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.865 [2024-12-15 06:26:40.775069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.865 [2024-12-15 06:26:40.775250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.865 [2024-12-15 06:26:40.775260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.865 [2024-12-15 06:26:40.775267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.865 [2024-12-15 06:26:40.775274] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.865 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:20.865 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:36:20.865 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:20.865 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:20.865 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:20.865 [2024-12-15 06:26:40.787471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.865 [2024-12-15 06:26:40.787837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.865 [2024-12-15 06:26:40.787857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.865 [2024-12-15 06:26:40.787865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.865 [2024-12-15 06:26:40.788044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.865 [2024-12-15 06:26:40.788219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.865 [2024-12-15 06:26:40.788229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.865 [2024-12-15 06:26:40.788236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.865 [2024-12-15 06:26:40.788243] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.865 [2024-12-15 06:26:40.800455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.865 [2024-12-15 06:26:40.800894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.865 [2024-12-15 06:26:40.800914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.865 [2024-12-15 06:26:40.800922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.865 [2024-12-15 06:26:40.801101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.865 [2024-12-15 06:26:40.801276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.865 [2024-12-15 06:26:40.801287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.865 [2024-12-15 06:26:40.801294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.865 [2024-12-15 06:26:40.801303] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.865 [2024-12-15 06:26:40.813505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.865 [2024-12-15 06:26:40.813797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.865 [2024-12-15 06:26:40.813816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.865 [2024-12-15 06:26:40.813825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.865 [2024-12-15 06:26:40.814009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.865 [2024-12-15 06:26:40.814183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.865 [2024-12-15 06:26:40.814194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.865 [2024-12-15 06:26:40.814201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.865 [2024-12-15 06:26:40.814208] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.865 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:20.865 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:20.865 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.865 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:20.865 [2024-12-15 06:26:40.826569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.865 [2024-12-15 06:26:40.826904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.865 [2024-12-15 06:26:40.826922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.865 [2024-12-15 06:26:40.826930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.865 [2024-12-15 06:26:40.827109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.865 [2024-12-15 06:26:40.827285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.865 [2024-12-15 06:26:40.827295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.865 [2024-12-15 06:26:40.827303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.865 [2024-12-15 06:26:40.827310] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.865 [2024-12-15 06:26:40.828124] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:20.865 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.865 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:20.865 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.865 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:20.865 [2024-12-15 06:26:40.839672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.865 [2024-12-15 06:26:40.839950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.865 [2024-12-15 06:26:40.839968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.865 [2024-12-15 06:26:40.839977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.865 [2024-12-15 06:26:40.840156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.865 [2024-12-15 06:26:40.840331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.865 [2024-12-15 06:26:40.840341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.865 [2024-12-15 06:26:40.840348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.865 [2024-12-15 06:26:40.840355] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.865 [2024-12-15 06:26:40.852712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.865 [2024-12-15 06:26:40.853175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.865 [2024-12-15 06:26:40.853195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.865 [2024-12-15 06:26:40.853203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.865 [2024-12-15 06:26:40.853379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.865 [2024-12-15 06:26:40.853552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.865 [2024-12-15 06:26:40.853562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.865 [2024-12-15 06:26:40.853569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.865 [2024-12-15 06:26:40.853576] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.865 Malloc0 00:36:20.865 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.865 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:20.865 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.865 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:20.865 [2024-12-15 06:26:40.865786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.865 [2024-12-15 06:26:40.866217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.865 [2024-12-15 06:26:40.866236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.865 [2024-12-15 06:26:40.866244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.865 [2024-12-15 06:26:40.866418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.865 [2024-12-15 06:26:40.866592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.865 [2024-12-15 06:26:40.866602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.865 [2024-12-15 06:26:40.866609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.865 [2024-12-15 06:26:40.866615] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.865 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.866 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:20.866 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.866 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:20.866 [2024-12-15 06:26:40.878832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.866 [2024-12-15 06:26:40.879265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:20.866 [2024-12-15 06:26:40.879284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ba490 with addr=10.0.0.2, port=4420 00:36:20.866 [2024-12-15 06:26:40.879292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ba490 is same with the state(6) to be set 00:36:20.866 [2024-12-15 06:26:40.879466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba490 (9): Bad file descriptor 00:36:20.866 [2024-12-15 06:26:40.879647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:20.866 [2024-12-15 06:26:40.879657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:20.866 [2024-12-15 06:26:40.879664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:20.866 [2024-12-15 06:26:40.879671] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:20.866 5036.17 IOPS, 19.67 MiB/s [2024-12-15T05:26:41.006Z] 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.866 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:20.866 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.866 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:20.866 [2024-12-15 06:26:40.887178] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:20.866 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.866 [2024-12-15 06:26:40.891903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:20.866 06:26:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1202019 00:36:20.866 [2024-12-15 06:26:40.914147] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:36:23.183 5889.71 IOPS, 23.01 MiB/s [2024-12-15T05:26:43.891Z] 6572.88 IOPS, 25.68 MiB/s [2024-12-15T05:26:45.269Z] 7109.33 IOPS, 27.77 MiB/s [2024-12-15T05:26:46.205Z] 7537.90 IOPS, 29.44 MiB/s [2024-12-15T05:26:47.143Z] 7891.64 IOPS, 30.83 MiB/s [2024-12-15T05:26:48.081Z] 8184.25 IOPS, 31.97 MiB/s [2024-12-15T05:26:49.018Z] 8421.08 IOPS, 32.89 MiB/s [2024-12-15T05:26:49.956Z] 8646.14 IOPS, 33.77 MiB/s 00:36:29.816 Latency(us) 00:36:29.816 [2024-12-15T05:26:49.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:29.816 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:29.816 Verification LBA range: start 0x0 length 0x4000 00:36:29.816 Nvme1n1 : 15.00 8824.47 34.47 10816.74 0.00 6497.17 624.15 16227.96 00:36:29.816 [2024-12-15T05:26:49.956Z] =================================================================================================================== 00:36:29.816 [2024-12-15T05:26:49.956Z] Total : 8824.47 34.47 10816.74 0.00 6497.17 624.15 16227.96 00:36:30.075 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:36:30.075 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:30.075 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.075 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:30.075 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.075 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:36:30.075 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:36:30.075 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:30.075 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:36:30.075 06:26:50 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:30.075 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:36:30.075 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:30.075 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:30.075 rmmod nvme_tcp 00:36:30.075 rmmod nvme_fabrics 00:36:30.075 rmmod nvme_keyring 00:36:30.075 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:30.075 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:36:30.075 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:36:30.075 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1203008 ']' 00:36:30.075 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1203008 00:36:30.075 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1203008 ']' 00:36:30.075 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1203008 00:36:30.075 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:36:30.075 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:30.075 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1203008 00:36:30.076 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:30.076 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:30.076 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1203008' 00:36:30.076 killing process with pid 1203008 00:36:30.076 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@973 -- # kill 1203008 00:36:30.076 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1203008 00:36:30.334 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:30.334 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:30.334 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:30.334 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:36:30.334 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:36:30.334 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:30.334 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:36:30.335 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:30.335 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:30.335 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:30.335 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:30.335 06:26:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:32.871 00:36:32.871 real 0m26.086s 00:36:32.871 user 1m0.824s 00:36:32.871 sys 0m6.725s 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:32.871 ************************************ 00:36:32.871 END TEST nvmf_bdevperf 00:36:32.871 ************************************ 00:36:32.871 06:26:52 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.871 ************************************ 00:36:32.871 START TEST nvmf_target_disconnect 00:36:32.871 ************************************ 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:32.871 * Looking for test storage... 00:36:32.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:36:32.871 06:26:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:32.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.871 --rc genhtml_branch_coverage=1 00:36:32.871 --rc genhtml_function_coverage=1 00:36:32.871 --rc genhtml_legend=1 00:36:32.871 --rc geninfo_all_blocks=1 00:36:32.871 --rc geninfo_unexecuted_blocks=1 
00:36:32.871 00:36:32.871 ' 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:32.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.871 --rc genhtml_branch_coverage=1 00:36:32.871 --rc genhtml_function_coverage=1 00:36:32.871 --rc genhtml_legend=1 00:36:32.871 --rc geninfo_all_blocks=1 00:36:32.871 --rc geninfo_unexecuted_blocks=1 00:36:32.871 00:36:32.871 ' 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:32.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.871 --rc genhtml_branch_coverage=1 00:36:32.871 --rc genhtml_function_coverage=1 00:36:32.871 --rc genhtml_legend=1 00:36:32.871 --rc geninfo_all_blocks=1 00:36:32.871 --rc geninfo_unexecuted_blocks=1 00:36:32.871 00:36:32.871 ' 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:32.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.871 --rc genhtml_branch_coverage=1 00:36:32.871 --rc genhtml_function_coverage=1 00:36:32.871 --rc genhtml_legend=1 00:36:32.871 --rc geninfo_all_blocks=1 00:36:32.871 --rc geninfo_unexecuted_blocks=1 00:36:32.871 00:36:32.871 ' 00:36:32.871 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:32.872 06:26:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:32.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:36:32.872 06:26:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:36:39.491 
06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:39.491 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:39.491 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:39.492 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:39.492 Found net devices under 0000:af:00.0: cvl_0_0 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:39.492 Found net devices under 0000:af:00.1: cvl_0_1 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:39.492 06:26:58 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:39.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:39.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:36:39.492 00:36:39.492 --- 10.0.0.2 ping statistics --- 00:36:39.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:39.492 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:39.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:39.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:36:39.492 00:36:39.492 --- 10.0.0.1 ping statistics --- 00:36:39.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:39.492 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:39.492 06:26:58 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:39.492 ************************************ 00:36:39.492 START TEST nvmf_target_disconnect_tc1 00:36:39.492 ************************************ 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:36:39.492 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:39.492 [2024-12-15 06:26:58.742981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.492 [2024-12-15 06:26:58.743040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x137ec50 with 
addr=10.0.0.2, port=4420 00:36:39.492 [2024-12-15 06:26:58.743065] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:39.492 [2024-12-15 06:26:58.743077] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:39.492 [2024-12-15 06:26:58.743084] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:36:39.492 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:36:39.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:36:39.493 Initializing NVMe Controllers 00:36:39.493 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:36:39.493 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:39.493 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:39.493 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:39.493 00:36:39.493 real 0m0.105s 00:36:39.493 user 0m0.046s 00:36:39.493 sys 0m0.059s 00:36:39.493 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:39.493 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:39.493 ************************************ 00:36:39.493 END TEST nvmf_target_disconnect_tc1 00:36:39.493 ************************************ 00:36:39.493 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:36:39.493 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:39.493 06:26:58 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:39.493 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:39.493 ************************************ 00:36:39.493 START TEST nvmf_target_disconnect_tc2 00:36:39.493 ************************************ 00:36:39.493 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:36:39.493 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:36:39.493 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:39.493 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:39.493 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:39.493 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:39.493 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1207928 00:36:39.493 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1207928 00:36:39.493 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:39.493 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1207928 ']' 00:36:39.493 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:39.493 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:39.493 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:39.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:39.493 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:39.493 06:26:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:39.493 [2024-12-15 06:26:58.891724] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:36:39.493 [2024-12-15 06:26:58.891771] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:39.493 [2024-12-15 06:26:58.972861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:39.493 [2024-12-15 06:26:58.995926] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:39.493 [2024-12-15 06:26:58.995964] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:39.493 [2024-12-15 06:26:58.995971] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:39.493 [2024-12-15 06:26:58.995977] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:39.493 [2024-12-15 06:26:58.995983] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:39.493 [2024-12-15 06:26:58.997492] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:36:39.493 [2024-12-15 06:26:58.997536] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:36:39.493 [2024-12-15 06:26:58.997648] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:36:39.493 [2024-12-15 06:26:58.997649] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:39.493 Malloc0 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.493 06:26:59 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:39.493 [2024-12-15 06:26:59.160296] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.493 06:26:59 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:39.493 [2024-12-15 06:26:59.189353] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1208103 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:36:39.493 06:26:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:41.487 06:27:01 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1207928 00:36:41.487 06:27:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:36:41.487 Read completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Read completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Read completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Write completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Write completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Read completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Write completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Write completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Read completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Read completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Write completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Read completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Write completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Write completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Read completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Read completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Read completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Read completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Write completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Read completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 
Read completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Read completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Write completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Read completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Read completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Write completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Write completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Write completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Read completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Write completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Write completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Write completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Read completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Read completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 [2024-12-15 06:27:01.221301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:41.487 Write completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Read completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Write completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Read completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Write completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.487 Read completed with error (sct=0, sc=8) 00:36:41.487 starting I/O failed 00:36:41.488 Write completed with error (sct=0, sc=8) 00:36:41.488 starting I/O 
failed 00:36:41.488 Read completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Read completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Read completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Read completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Write completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Read completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Read completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Read completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Write completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Write completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Read completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Read completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Write completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Read completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Write completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Read completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Write completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Write completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Read completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Read completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Read completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Write completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Write completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 
00:36:41.488 [2024-12-15 06:27:01.221512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:41.488 Read completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Read completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Write completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Read completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Write completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Write completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Write completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Read completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Read completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Read completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Read completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Read completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Read completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Write completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Write completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Read completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Read completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Write completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Read completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Write completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Write completed with error (sct=0, sc=8) 00:36:41.488 
starting I/O failed 00:36:41.488 Write completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Write completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Read completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Read completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Write completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Write completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Write completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Write completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Write completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Write completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 Write completed with error (sct=0, sc=8) 00:36:41.488 starting I/O failed 00:36:41.488 [2024-12-15 06:27:01.221720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:41.488 [2024-12-15 06:27:01.221909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.488 [2024-12-15 06:27:01.221933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.488 qpair failed and we were unable to recover it. 00:36:41.488 [2024-12-15 06:27:01.222147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.488 [2024-12-15 06:27:01.222159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.488 qpair failed and we were unable to recover it. 
00:36:41.488 [2024-12-15 06:27:01.222381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.488 [2024-12-15 06:27:01.222394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.488 qpair failed and we were unable to recover it. 00:36:41.488 [2024-12-15 06:27:01.222604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.488 [2024-12-15 06:27:01.222616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.488 qpair failed and we were unable to recover it. 00:36:41.488 [2024-12-15 06:27:01.222820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.488 [2024-12-15 06:27:01.222832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.488 qpair failed and we were unable to recover it. 00:36:41.488 [2024-12-15 06:27:01.222975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.488 [2024-12-15 06:27:01.222988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.488 qpair failed and we were unable to recover it. 00:36:41.488 [2024-12-15 06:27:01.223108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.488 [2024-12-15 06:27:01.223120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.488 qpair failed and we were unable to recover it. 
00:36:41.488 [2024-12-15 06:27:01.223295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.488 [2024-12-15 06:27:01.223306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.488 qpair failed and we were unable to recover it. 00:36:41.488 [2024-12-15 06:27:01.223374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.488 [2024-12-15 06:27:01.223386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.488 qpair failed and we were unable to recover it. 00:36:41.488 [2024-12-15 06:27:01.223591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.488 [2024-12-15 06:27:01.223625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.488 qpair failed and we were unable to recover it. 00:36:41.488 [2024-12-15 06:27:01.223757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.488 [2024-12-15 06:27:01.223791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.488 qpair failed and we were unable to recover it. 00:36:41.488 [2024-12-15 06:27:01.223926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.488 [2024-12-15 06:27:01.223960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.488 qpair failed and we were unable to recover it. 
00:36:41.488 [2024-12-15 06:27:01.224122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.488 [2024-12-15 06:27:01.224171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.488 qpair failed and we were unable to recover it. 00:36:41.488 [2024-12-15 06:27:01.224347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.488 [2024-12-15 06:27:01.224381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.488 qpair failed and we were unable to recover it. 00:36:41.488 [2024-12-15 06:27:01.224582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.488 [2024-12-15 06:27:01.224617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.488 qpair failed and we were unable to recover it. 00:36:41.488 [2024-12-15 06:27:01.224861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.488 [2024-12-15 06:27:01.224895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.488 qpair failed and we were unable to recover it. 00:36:41.488 [2024-12-15 06:27:01.225138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.488 [2024-12-15 06:27:01.225174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.488 qpair failed and we were unable to recover it. 
00:36:41.488 [2024-12-15 06:27:01.225348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.488 [2024-12-15 06:27:01.225381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.488 qpair failed and we were unable to recover it. 00:36:41.488 [2024-12-15 06:27:01.225587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.488 [2024-12-15 06:27:01.225621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.488 qpair failed and we were unable to recover it. 00:36:41.488 [2024-12-15 06:27:01.225802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.489 [2024-12-15 06:27:01.225836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.489 qpair failed and we were unable to recover it. 00:36:41.489 [2024-12-15 06:27:01.226142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.489 [2024-12-15 06:27:01.226178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.489 qpair failed and we were unable to recover it. 00:36:41.489 [2024-12-15 06:27:01.226447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.489 [2024-12-15 06:27:01.226480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.489 qpair failed and we were unable to recover it. 
00:36:41.491 [2024-12-15 06:27:01.253599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.491 [2024-12-15 06:27:01.253641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.491 qpair failed and we were unable to recover it. 00:36:41.491 [2024-12-15 06:27:01.253874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.491 [2024-12-15 06:27:01.253909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.491 qpair failed and we were unable to recover it. 00:36:41.491 [2024-12-15 06:27:01.254131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.254167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 00:36:41.492 [2024-12-15 06:27:01.254350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.254385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 00:36:41.492 [2024-12-15 06:27:01.254641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.254678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 
00:36:41.492 [2024-12-15 06:27:01.254808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.254842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 00:36:41.492 [2024-12-15 06:27:01.255031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.255068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 00:36:41.492 [2024-12-15 06:27:01.255338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.255372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 00:36:41.492 [2024-12-15 06:27:01.255510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.255545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 00:36:41.492 [2024-12-15 06:27:01.256936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.257007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 
00:36:41.492 [2024-12-15 06:27:01.257234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.257271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 00:36:41.492 [2024-12-15 06:27:01.257454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.257488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 00:36:41.492 [2024-12-15 06:27:01.257755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.257790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 00:36:41.492 [2024-12-15 06:27:01.258035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.258071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 00:36:41.492 [2024-12-15 06:27:01.258289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.258323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 
00:36:41.492 [2024-12-15 06:27:01.258517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.258550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 00:36:41.492 [2024-12-15 06:27:01.258767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.258800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 00:36:41.492 [2024-12-15 06:27:01.259000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.259036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 00:36:41.492 [2024-12-15 06:27:01.259230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.259264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 00:36:41.492 [2024-12-15 06:27:01.259399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.259432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 
00:36:41.492 [2024-12-15 06:27:01.259566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.259600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 00:36:41.492 [2024-12-15 06:27:01.259863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.259896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 00:36:41.492 [2024-12-15 06:27:01.260180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.260216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 00:36:41.492 [2024-12-15 06:27:01.260407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.260444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 00:36:41.492 [2024-12-15 06:27:01.260701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.260735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 
00:36:41.492 [2024-12-15 06:27:01.261039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.261074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 00:36:41.492 [2024-12-15 06:27:01.261201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.261234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 00:36:41.492 [2024-12-15 06:27:01.261427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.261462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 00:36:41.492 [2024-12-15 06:27:01.261760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.261793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 00:36:41.492 [2024-12-15 06:27:01.262054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.262089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 
00:36:41.492 [2024-12-15 06:27:01.262283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.262317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 00:36:41.492 [2024-12-15 06:27:01.262502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.262536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 00:36:41.492 [2024-12-15 06:27:01.262653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.262686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 00:36:41.492 [2024-12-15 06:27:01.262955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.262989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 00:36:41.492 [2024-12-15 06:27:01.263275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.492 [2024-12-15 06:27:01.263308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.492 qpair failed and we were unable to recover it. 
00:36:41.493 [2024-12-15 06:27:01.263576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.263610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 00:36:41.493 [2024-12-15 06:27:01.263886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.263919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 00:36:41.493 [2024-12-15 06:27:01.264165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.264201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 00:36:41.493 [2024-12-15 06:27:01.264489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.264522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 00:36:41.493 [2024-12-15 06:27:01.264726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.264758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 
00:36:41.493 [2024-12-15 06:27:01.265036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.265077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 00:36:41.493 [2024-12-15 06:27:01.265208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.265241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 00:36:41.493 [2024-12-15 06:27:01.265479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.265512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 00:36:41.493 [2024-12-15 06:27:01.265818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.265852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 00:36:41.493 [2024-12-15 06:27:01.266047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.266082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 
00:36:41.493 [2024-12-15 06:27:01.266329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.266362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 00:36:41.493 [2024-12-15 06:27:01.266558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.266591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 00:36:41.493 [2024-12-15 06:27:01.266769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.266802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 00:36:41.493 [2024-12-15 06:27:01.266985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.267028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 00:36:41.493 [2024-12-15 06:27:01.267155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.267188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 
00:36:41.493 [2024-12-15 06:27:01.267449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.267482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 00:36:41.493 [2024-12-15 06:27:01.267622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.267656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 00:36:41.493 [2024-12-15 06:27:01.267922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.267956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 00:36:41.493 [2024-12-15 06:27:01.268268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.268302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 00:36:41.493 [2024-12-15 06:27:01.268442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.268476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 
00:36:41.493 [2024-12-15 06:27:01.268742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.268776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 00:36:41.493 [2024-12-15 06:27:01.268918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.268952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 00:36:41.493 [2024-12-15 06:27:01.269153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.269188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 00:36:41.493 [2024-12-15 06:27:01.269376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.269411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 00:36:41.493 [2024-12-15 06:27:01.269551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.269585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 
00:36:41.493 [2024-12-15 06:27:01.269839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.269873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 00:36:41.493 [2024-12-15 06:27:01.270122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.270158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 00:36:41.493 [2024-12-15 06:27:01.270290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.270324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 00:36:41.493 [2024-12-15 06:27:01.270518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.270551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 00:36:41.493 [2024-12-15 06:27:01.270661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.270695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 
00:36:41.493 [2024-12-15 06:27:01.270962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.271008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 00:36:41.493 [2024-12-15 06:27:01.271193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.493 [2024-12-15 06:27:01.271226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.493 qpair failed and we were unable to recover it. 00:36:41.493 Read completed with error (sct=0, sc=8) 00:36:41.493 starting I/O failed 00:36:41.493 Read completed with error (sct=0, sc=8) 00:36:41.493 starting I/O failed 00:36:41.493 Read completed with error (sct=0, sc=8) 00:36:41.493 starting I/O failed 00:36:41.493 Read completed with error (sct=0, sc=8) 00:36:41.493 starting I/O failed 00:36:41.493 Write completed with error (sct=0, sc=8) 00:36:41.493 starting I/O failed 00:36:41.493 Read completed with error (sct=0, sc=8) 00:36:41.493 starting I/O failed 00:36:41.493 Read completed with error (sct=0, sc=8) 00:36:41.493 starting I/O failed 00:36:41.493 Read completed with error (sct=0, sc=8) 00:36:41.493 starting I/O failed 00:36:41.493 Read completed with error (sct=0, sc=8) 00:36:41.493 starting I/O failed 00:36:41.493 Read completed with error (sct=0, sc=8) 00:36:41.493 starting I/O failed 00:36:41.493 Read completed with error (sct=0, sc=8) 00:36:41.493 starting I/O failed 00:36:41.493 Read completed with error (sct=0, sc=8) 00:36:41.493 starting I/O failed 00:36:41.493 Write completed with error (sct=0, sc=8) 00:36:41.493 starting I/O failed 00:36:41.493 Read completed with error (sct=0, sc=8) 00:36:41.493 starting I/O failed 00:36:41.493 Write completed with error (sct=0, sc=8) 00:36:41.493 starting I/O failed 00:36:41.493 
Read completed with error (sct=0, sc=8) 00:36:41.493 starting I/O failed 00:36:41.493 Write completed with error (sct=0, sc=8) 00:36:41.493 starting I/O failed 00:36:41.493 Read completed with error (sct=0, sc=8) 00:36:41.493 starting I/O failed 00:36:41.493 Write completed with error (sct=0, sc=8) 00:36:41.493 starting I/O failed 00:36:41.493 Read completed with error (sct=0, sc=8) 00:36:41.493 starting I/O failed 00:36:41.493 Write completed with error (sct=0, sc=8) 00:36:41.493 starting I/O failed 00:36:41.493 Read completed with error (sct=0, sc=8) 00:36:41.494 starting I/O failed 00:36:41.494 Write completed with error (sct=0, sc=8) 00:36:41.494 starting I/O failed 00:36:41.494 Read completed with error (sct=0, sc=8) 00:36:41.494 starting I/O failed 00:36:41.494 Write completed with error (sct=0, sc=8) 00:36:41.494 starting I/O failed 00:36:41.494 Write completed with error (sct=0, sc=8) 00:36:41.494 starting I/O failed 00:36:41.494 Write completed with error (sct=0, sc=8) 00:36:41.494 starting I/O failed 00:36:41.494 Write completed with error (sct=0, sc=8) 00:36:41.494 starting I/O failed 00:36:41.494 Read completed with error (sct=0, sc=8) 00:36:41.494 starting I/O failed 00:36:41.494 Write completed with error (sct=0, sc=8) 00:36:41.494 starting I/O failed 00:36:41.494 Write completed with error (sct=0, sc=8) 00:36:41.494 starting I/O failed 00:36:41.494 Write completed with error (sct=0, sc=8) 00:36:41.494 starting I/O failed 00:36:41.494 [2024-12-15 06:27:01.271878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:41.494 [2024-12-15 06:27:01.272171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.494 [2024-12-15 06:27:01.272231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.494 qpair failed and we were unable to 
recover it. 00:36:41.494 [2024-12-15 06:27:01.272433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.494 [2024-12-15 06:27:01.272469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.494 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error pair repeats continuously from 06:27:01.272433 through 06:27:01.298064, first for tqpair=0x7ff290000b90 and, from 06:27:01.279551 onward, for tqpair=0x7ff288000b90; every attempt targets addr=10.0.0.2, port=4420 and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:36:41.497 [2024-12-15 06:27:01.298330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.298364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.497 qpair failed and we were unable to recover it. 00:36:41.497 [2024-12-15 06:27:01.298583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.298617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.497 qpair failed and we were unable to recover it. 00:36:41.497 [2024-12-15 06:27:01.298892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.298925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.497 qpair failed and we were unable to recover it. 00:36:41.497 [2024-12-15 06:27:01.299123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.299157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.497 qpair failed and we were unable to recover it. 00:36:41.497 [2024-12-15 06:27:01.299405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.299439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.497 qpair failed and we were unable to recover it. 
00:36:41.497 [2024-12-15 06:27:01.299629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.299663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.497 qpair failed and we were unable to recover it. 00:36:41.497 [2024-12-15 06:27:01.299849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.299884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.497 qpair failed and we were unable to recover it. 00:36:41.497 [2024-12-15 06:27:01.300064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.300100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.497 qpair failed and we were unable to recover it. 00:36:41.497 [2024-12-15 06:27:01.300231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.300265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.497 qpair failed and we were unable to recover it. 00:36:41.497 [2024-12-15 06:27:01.300457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.300492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.497 qpair failed and we were unable to recover it. 
00:36:41.497 [2024-12-15 06:27:01.300681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.300715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.497 qpair failed and we were unable to recover it. 00:36:41.497 [2024-12-15 06:27:01.301018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.301053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.497 qpair failed and we were unable to recover it. 00:36:41.497 [2024-12-15 06:27:01.301183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.301217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.497 qpair failed and we were unable to recover it. 00:36:41.497 [2024-12-15 06:27:01.301420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.301455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.497 qpair failed and we were unable to recover it. 00:36:41.497 [2024-12-15 06:27:01.301692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.301725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.497 qpair failed and we were unable to recover it. 
00:36:41.497 [2024-12-15 06:27:01.301838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.301872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.497 qpair failed and we were unable to recover it. 00:36:41.497 [2024-12-15 06:27:01.302059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.302095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.497 qpair failed and we were unable to recover it. 00:36:41.497 [2024-12-15 06:27:01.302275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.302308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.497 qpair failed and we were unable to recover it. 00:36:41.497 [2024-12-15 06:27:01.302478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.302512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.497 qpair failed and we were unable to recover it. 00:36:41.497 [2024-12-15 06:27:01.302648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.302682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.497 qpair failed and we were unable to recover it. 
00:36:41.497 [2024-12-15 06:27:01.302927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.302960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.497 qpair failed and we were unable to recover it. 00:36:41.497 [2024-12-15 06:27:01.303155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.303190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.497 qpair failed and we were unable to recover it. 00:36:41.497 [2024-12-15 06:27:01.303311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.303345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.497 qpair failed and we were unable to recover it. 00:36:41.497 [2024-12-15 06:27:01.303499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.303534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.497 qpair failed and we were unable to recover it. 00:36:41.497 [2024-12-15 06:27:01.303713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.303749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.497 qpair failed and we were unable to recover it. 
00:36:41.497 [2024-12-15 06:27:01.303874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.303907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.497 qpair failed and we were unable to recover it. 00:36:41.497 [2024-12-15 06:27:01.304106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.304142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.497 qpair failed and we were unable to recover it. 00:36:41.497 [2024-12-15 06:27:01.304352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.304386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.497 qpair failed and we were unable to recover it. 00:36:41.497 [2024-12-15 06:27:01.304573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.304607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.497 qpair failed and we were unable to recover it. 00:36:41.497 [2024-12-15 06:27:01.304866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.304900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.497 qpair failed and we were unable to recover it. 
00:36:41.497 [2024-12-15 06:27:01.305024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.497 [2024-12-15 06:27:01.305060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.305240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.305273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.305481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.305514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.305622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.305656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.305854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.305888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 
00:36:41.498 [2024-12-15 06:27:01.306079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.306113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.306252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.306285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.306410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.306449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.306567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.306599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.306784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.306817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 
00:36:41.498 [2024-12-15 06:27:01.306960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.307003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.307190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.307223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.307346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.307379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.307493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.307526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.307719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.307751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 
00:36:41.498 [2024-12-15 06:27:01.307941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.307974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.308099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.308133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.308261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.308293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.308411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.308445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.308556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.308590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 
00:36:41.498 [2024-12-15 06:27:01.308791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.308823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.309013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.309049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.309191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.309225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.309352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.309387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.309509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.309543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 
00:36:41.498 [2024-12-15 06:27:01.309675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.309709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.309975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.310018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.310142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.310176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.310315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.310348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.310523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.310556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 
00:36:41.498 [2024-12-15 06:27:01.310682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.310715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.310890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.310924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.311169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.311205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.311334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.311369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.311614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.311648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 
00:36:41.498 [2024-12-15 06:27:01.311848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.311883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.312034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.312070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.312328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.312363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.312621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.498 [2024-12-15 06:27:01.312655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.498 qpair failed and we were unable to recover it. 00:36:41.498 [2024-12-15 06:27:01.312776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.499 [2024-12-15 06:27:01.312810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.499 qpair failed and we were unable to recover it. 
00:36:41.499 [2024-12-15 06:27:01.312944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.499 [2024-12-15 06:27:01.312977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.499 qpair failed and we were unable to recover it. 00:36:41.499 [2024-12-15 06:27:01.313232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.499 [2024-12-15 06:27:01.313268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.499 qpair failed and we were unable to recover it. 00:36:41.499 [2024-12-15 06:27:01.313467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.499 [2024-12-15 06:27:01.313500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.499 qpair failed and we were unable to recover it. 00:36:41.499 [2024-12-15 06:27:01.313689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.499 [2024-12-15 06:27:01.313723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.499 qpair failed and we were unable to recover it. 00:36:41.499 [2024-12-15 06:27:01.313907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.499 [2024-12-15 06:27:01.313941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.499 qpair failed and we were unable to recover it. 
00:36:41.499 [2024-12-15 06:27:01.314140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.499 [2024-12-15 06:27:01.314176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:41.499 qpair failed and we were unable to recover it.
00:36:41.502 [identical connect()/qpair log triplet repeated through 2024-12-15 06:27:01.338946, every attempt failing with errno = 111 against 10.0.0.2:4420]
00:36:41.502 [2024-12-15 06:27:01.339144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.502 [2024-12-15 06:27:01.339179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.502 qpair failed and we were unable to recover it. 00:36:41.502 [2024-12-15 06:27:01.339390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.502 [2024-12-15 06:27:01.339425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.502 qpair failed and we were unable to recover it. 00:36:41.502 [2024-12-15 06:27:01.339543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.502 [2024-12-15 06:27:01.339577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.502 qpair failed and we were unable to recover it. 00:36:41.502 [2024-12-15 06:27:01.339762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.502 [2024-12-15 06:27:01.339796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.502 qpair failed and we were unable to recover it. 00:36:41.502 [2024-12-15 06:27:01.339979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.502 [2024-12-15 06:27:01.340035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.502 qpair failed and we were unable to recover it. 
00:36:41.502 [2024-12-15 06:27:01.340209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.502 [2024-12-15 06:27:01.340244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.502 qpair failed and we were unable to recover it. 00:36:41.502 [2024-12-15 06:27:01.340363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.502 [2024-12-15 06:27:01.340397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.502 qpair failed and we were unable to recover it. 00:36:41.502 [2024-12-15 06:27:01.340637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.502 [2024-12-15 06:27:01.340677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.502 qpair failed and we were unable to recover it. 00:36:41.502 [2024-12-15 06:27:01.340855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.502 [2024-12-15 06:27:01.340890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.502 qpair failed and we were unable to recover it. 00:36:41.502 [2024-12-15 06:27:01.341020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.502 [2024-12-15 06:27:01.341055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.502 qpair failed and we were unable to recover it. 
00:36:41.502 [2024-12-15 06:27:01.341249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.502 [2024-12-15 06:27:01.341282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.502 qpair failed and we were unable to recover it. 00:36:41.502 [2024-12-15 06:27:01.341525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.502 [2024-12-15 06:27:01.341558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.502 qpair failed and we were unable to recover it. 00:36:41.502 [2024-12-15 06:27:01.341795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.502 [2024-12-15 06:27:01.341829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.502 qpair failed and we were unable to recover it. 00:36:41.502 [2024-12-15 06:27:01.342010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.502 [2024-12-15 06:27:01.342044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.502 qpair failed and we were unable to recover it. 00:36:41.502 [2024-12-15 06:27:01.342284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.502 [2024-12-15 06:27:01.342320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.502 qpair failed and we were unable to recover it. 
00:36:41.502 [2024-12-15 06:27:01.342511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.502 [2024-12-15 06:27:01.342544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.502 qpair failed and we were unable to recover it. 00:36:41.502 [2024-12-15 06:27:01.342749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.502 [2024-12-15 06:27:01.342784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.502 qpair failed and we were unable to recover it. 00:36:41.502 [2024-12-15 06:27:01.342967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.502 [2024-12-15 06:27:01.343009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.502 qpair failed and we were unable to recover it. 00:36:41.502 [2024-12-15 06:27:01.343254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.502 [2024-12-15 06:27:01.343288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.502 qpair failed and we were unable to recover it. 00:36:41.502 [2024-12-15 06:27:01.343408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.502 [2024-12-15 06:27:01.343442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.502 qpair failed and we were unable to recover it. 
00:36:41.502 [2024-12-15 06:27:01.343628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.502 [2024-12-15 06:27:01.343669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.502 qpair failed and we were unable to recover it. 00:36:41.502 [2024-12-15 06:27:01.343798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.502 [2024-12-15 06:27:01.343831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.502 qpair failed and we were unable to recover it. 00:36:41.502 [2024-12-15 06:27:01.344019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.502 [2024-12-15 06:27:01.344055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.502 qpair failed and we were unable to recover it. 00:36:41.502 [2024-12-15 06:27:01.344195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.502 [2024-12-15 06:27:01.344229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.502 qpair failed and we were unable to recover it. 00:36:41.502 [2024-12-15 06:27:01.344334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.502 [2024-12-15 06:27:01.344368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.502 qpair failed and we were unable to recover it. 
00:36:41.502 [2024-12-15 06:27:01.344486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.502 [2024-12-15 06:27:01.344519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.502 qpair failed and we were unable to recover it. 00:36:41.502 [2024-12-15 06:27:01.344760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.502 [2024-12-15 06:27:01.344795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.502 qpair failed and we were unable to recover it. 00:36:41.502 [2024-12-15 06:27:01.344917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.502 [2024-12-15 06:27:01.344951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.502 qpair failed and we were unable to recover it. 00:36:41.502 [2024-12-15 06:27:01.345080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.345115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.503 [2024-12-15 06:27:01.345302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.345337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 
00:36:41.503 [2024-12-15 06:27:01.345465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.345499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.503 [2024-12-15 06:27:01.345706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.345740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.503 [2024-12-15 06:27:01.345942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.345977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.503 [2024-12-15 06:27:01.346209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.346243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.503 [2024-12-15 06:27:01.346425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.346460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 
00:36:41.503 [2024-12-15 06:27:01.346705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.346738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.503 [2024-12-15 06:27:01.347014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.347048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.503 [2024-12-15 06:27:01.347314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.347347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.503 [2024-12-15 06:27:01.347537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.347572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.503 [2024-12-15 06:27:01.347705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.347738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 
00:36:41.503 [2024-12-15 06:27:01.347935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.347968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.503 [2024-12-15 06:27:01.348123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.348158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.503 [2024-12-15 06:27:01.348342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.348374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.503 [2024-12-15 06:27:01.348575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.348608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.503 [2024-12-15 06:27:01.348795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.348828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 
00:36:41.503 [2024-12-15 06:27:01.349076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.349110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.503 [2024-12-15 06:27:01.349377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.349410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.503 [2024-12-15 06:27:01.349597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.349636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.503 [2024-12-15 06:27:01.349818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.349851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.503 [2024-12-15 06:27:01.350035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.350070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 
00:36:41.503 [2024-12-15 06:27:01.350172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.350204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.503 [2024-12-15 06:27:01.350442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.350475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.503 [2024-12-15 06:27:01.350665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.350698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.503 [2024-12-15 06:27:01.350876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.350910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.503 [2024-12-15 06:27:01.351048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.351083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 
00:36:41.503 [2024-12-15 06:27:01.351205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.351239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.503 [2024-12-15 06:27:01.351499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.351532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.503 [2024-12-15 06:27:01.351648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.351681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.503 [2024-12-15 06:27:01.351894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.351927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.503 [2024-12-15 06:27:01.352046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.352078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 
00:36:41.503 [2024-12-15 06:27:01.352280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.352313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.503 [2024-12-15 06:27:01.352492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.352527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.503 [2024-12-15 06:27:01.352696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.352729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.503 [2024-12-15 06:27:01.352846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.352878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.503 [2024-12-15 06:27:01.353156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.353190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 
00:36:41.503 [2024-12-15 06:27:01.353370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.353403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.503 [2024-12-15 06:27:01.353575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.503 [2024-12-15 06:27:01.353608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.503 qpair failed and we were unable to recover it. 00:36:41.504 [2024-12-15 06:27:01.353848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.504 [2024-12-15 06:27:01.353880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.504 qpair failed and we were unable to recover it. 00:36:41.504 [2024-12-15 06:27:01.354009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.504 [2024-12-15 06:27:01.354044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.504 qpair failed and we were unable to recover it. 00:36:41.504 [2024-12-15 06:27:01.354150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.504 [2024-12-15 06:27:01.354182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.504 qpair failed and we were unable to recover it. 
00:36:41.504 [2024-12-15 06:27:01.354373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.504 [2024-12-15 06:27:01.354406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.504 qpair failed and we were unable to recover it. 00:36:41.504 [2024-12-15 06:27:01.354590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.504 [2024-12-15 06:27:01.354623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.504 qpair failed and we were unable to recover it. 00:36:41.504 [2024-12-15 06:27:01.354763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.504 [2024-12-15 06:27:01.354795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.504 qpair failed and we were unable to recover it. 00:36:41.504 [2024-12-15 06:27:01.355064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.504 [2024-12-15 06:27:01.355098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.504 qpair failed and we were unable to recover it. 00:36:41.504 [2024-12-15 06:27:01.355221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.504 [2024-12-15 06:27:01.355255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.504 qpair failed and we were unable to recover it. 
00:36:41.504 [2024-12-15 06:27:01.355470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.504 [2024-12-15 06:27:01.355502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.504 qpair failed and we were unable to recover it. 00:36:41.504 [2024-12-15 06:27:01.355623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.504 [2024-12-15 06:27:01.355656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.504 qpair failed and we were unable to recover it. 00:36:41.504 [2024-12-15 06:27:01.355893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.504 [2024-12-15 06:27:01.355926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.504 qpair failed and we were unable to recover it. 00:36:41.504 [2024-12-15 06:27:01.356097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.504 [2024-12-15 06:27:01.356131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.504 qpair failed and we were unable to recover it. 00:36:41.504 [2024-12-15 06:27:01.356259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.504 [2024-12-15 06:27:01.356291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.504 qpair failed and we were unable to recover it. 
00:36:41.507 [2024-12-15 06:27:01.382034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.507 [2024-12-15 06:27:01.382068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.507 qpair failed and we were unable to recover it. 00:36:41.507 [2024-12-15 06:27:01.382203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.507 [2024-12-15 06:27:01.382235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.507 qpair failed and we were unable to recover it. 00:36:41.507 [2024-12-15 06:27:01.382355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.507 [2024-12-15 06:27:01.382388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.507 qpair failed and we were unable to recover it. 00:36:41.507 [2024-12-15 06:27:01.382582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.507 [2024-12-15 06:27:01.382614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.507 qpair failed and we were unable to recover it. 00:36:41.507 [2024-12-15 06:27:01.382820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.507 [2024-12-15 06:27:01.382854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.507 qpair failed and we were unable to recover it. 
00:36:41.507 [2024-12-15 06:27:01.383135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.507 [2024-12-15 06:27:01.383171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.507 qpair failed and we were unable to recover it. 00:36:41.507 [2024-12-15 06:27:01.383315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.507 [2024-12-15 06:27:01.383348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.507 qpair failed and we were unable to recover it. 00:36:41.507 [2024-12-15 06:27:01.383465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.507 [2024-12-15 06:27:01.383498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.507 qpair failed and we were unable to recover it. 00:36:41.507 [2024-12-15 06:27:01.383794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.507 [2024-12-15 06:27:01.383828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.507 qpair failed and we were unable to recover it. 00:36:41.507 [2024-12-15 06:27:01.383950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.507 [2024-12-15 06:27:01.383983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.507 qpair failed and we were unable to recover it. 
00:36:41.507 [2024-12-15 06:27:01.384167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.507 [2024-12-15 06:27:01.384202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.507 qpair failed and we were unable to recover it. 00:36:41.507 [2024-12-15 06:27:01.384388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.507 [2024-12-15 06:27:01.384422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.507 qpair failed and we were unable to recover it. 00:36:41.507 [2024-12-15 06:27:01.384551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.507 [2024-12-15 06:27:01.384584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.507 qpair failed and we were unable to recover it. 00:36:41.507 [2024-12-15 06:27:01.384758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.507 [2024-12-15 06:27:01.384790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.507 qpair failed and we were unable to recover it. 00:36:41.507 [2024-12-15 06:27:01.384907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.507 [2024-12-15 06:27:01.384940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.507 qpair failed and we were unable to recover it. 
00:36:41.507 [2024-12-15 06:27:01.385140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.507 [2024-12-15 06:27:01.385175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.507 qpair failed and we were unable to recover it. 00:36:41.507 [2024-12-15 06:27:01.385418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.507 [2024-12-15 06:27:01.385451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.507 qpair failed and we were unable to recover it. 00:36:41.507 [2024-12-15 06:27:01.385716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.507 [2024-12-15 06:27:01.385754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.507 qpair failed and we were unable to recover it. 00:36:41.507 [2024-12-15 06:27:01.385957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.507 [2024-12-15 06:27:01.385990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.507 qpair failed and we were unable to recover it. 00:36:41.507 [2024-12-15 06:27:01.386208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.507 [2024-12-15 06:27:01.386243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.507 qpair failed and we were unable to recover it. 
00:36:41.507 [2024-12-15 06:27:01.386365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.507 [2024-12-15 06:27:01.386398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.507 qpair failed and we were unable to recover it. 00:36:41.507 [2024-12-15 06:27:01.386542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.507 [2024-12-15 06:27:01.386575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.507 qpair failed and we were unable to recover it. 00:36:41.507 [2024-12-15 06:27:01.386832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.507 [2024-12-15 06:27:01.386865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.507 qpair failed and we were unable to recover it. 00:36:41.507 [2024-12-15 06:27:01.387136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.507 [2024-12-15 06:27:01.387170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.507 qpair failed and we were unable to recover it. 00:36:41.507 [2024-12-15 06:27:01.387455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.507 [2024-12-15 06:27:01.387488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.507 qpair failed and we were unable to recover it. 
00:36:41.507 [2024-12-15 06:27:01.387760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.507 [2024-12-15 06:27:01.387793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.507 qpair failed and we were unable to recover it. 00:36:41.507 [2024-12-15 06:27:01.387979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.507 [2024-12-15 06:27:01.388021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.507 qpair failed and we were unable to recover it. 00:36:41.507 [2024-12-15 06:27:01.388243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.388276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.508 [2024-12-15 06:27:01.388518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.388550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.508 [2024-12-15 06:27:01.388678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.388712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 
00:36:41.508 [2024-12-15 06:27:01.388898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.388931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.508 [2024-12-15 06:27:01.389091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.389126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.508 [2024-12-15 06:27:01.389317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.389351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.508 [2024-12-15 06:27:01.389627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.389661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.508 [2024-12-15 06:27:01.389794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.389827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 
00:36:41.508 [2024-12-15 06:27:01.390068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.390103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.508 [2024-12-15 06:27:01.390240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.390274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.508 [2024-12-15 06:27:01.390466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.390499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.508 [2024-12-15 06:27:01.390623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.390657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.508 [2024-12-15 06:27:01.390794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.390828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 
00:36:41.508 [2024-12-15 06:27:01.391010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.391043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.508 [2024-12-15 06:27:01.391155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.391189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.508 [2024-12-15 06:27:01.391428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.391461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.508 [2024-12-15 06:27:01.391590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.391623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.508 [2024-12-15 06:27:01.391846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.391880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 
00:36:41.508 [2024-12-15 06:27:01.392102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.392137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.508 [2024-12-15 06:27:01.392268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.392301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.508 [2024-12-15 06:27:01.392566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.392600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.508 [2024-12-15 06:27:01.392789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.392823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.508 [2024-12-15 06:27:01.392955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.392988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 
00:36:41.508 [2024-12-15 06:27:01.393174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.393207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.508 [2024-12-15 06:27:01.393397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.393430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.508 [2024-12-15 06:27:01.393571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.393605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.508 [2024-12-15 06:27:01.393804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.393838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.508 [2024-12-15 06:27:01.394076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.394112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 
00:36:41.508 [2024-12-15 06:27:01.394354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.394387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.508 [2024-12-15 06:27:01.394654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.394687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.508 [2024-12-15 06:27:01.394825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.394864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.508 [2024-12-15 06:27:01.395109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.395144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.508 [2024-12-15 06:27:01.395363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.395396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 
00:36:41.508 [2024-12-15 06:27:01.395673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.395706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.508 [2024-12-15 06:27:01.395986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.396027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.508 [2024-12-15 06:27:01.396206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.396240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.508 [2024-12-15 06:27:01.396505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.396538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.508 [2024-12-15 06:27:01.396787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.396821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 
00:36:41.508 [2024-12-15 06:27:01.396946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.508 [2024-12-15 06:27:01.396979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.508 qpair failed and we were unable to recover it. 00:36:41.509 [2024-12-15 06:27:01.397178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.509 [2024-12-15 06:27:01.397212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.509 qpair failed and we were unable to recover it. 00:36:41.509 [2024-12-15 06:27:01.397481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.509 [2024-12-15 06:27:01.397514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.509 qpair failed and we were unable to recover it. 00:36:41.509 [2024-12-15 06:27:01.397633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.509 [2024-12-15 06:27:01.397667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.509 qpair failed and we were unable to recover it. 00:36:41.509 [2024-12-15 06:27:01.397933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.509 [2024-12-15 06:27:01.397966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.509 qpair failed and we were unable to recover it. 
00:36:41.509 [2024-12-15 06:27:01.398189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.509 [2024-12-15 06:27:01.398224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.509 qpair failed and we were unable to recover it. 00:36:41.509 [2024-12-15 06:27:01.398424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.509 [2024-12-15 06:27:01.398458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.509 qpair failed and we were unable to recover it. 00:36:41.509 [2024-12-15 06:27:01.398664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.509 [2024-12-15 06:27:01.398697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.509 qpair failed and we were unable to recover it. 00:36:41.509 [2024-12-15 06:27:01.398938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.509 [2024-12-15 06:27:01.398971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.509 qpair failed and we were unable to recover it. 00:36:41.509 [2024-12-15 06:27:01.399163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.509 [2024-12-15 06:27:01.399198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.509 qpair failed and we were unable to recover it. 
00:36:41.509 [2024-12-15 06:27:01.399393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.509 [2024-12-15 06:27:01.399426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.509 qpair failed and we were unable to recover it. 00:36:41.509 [2024-12-15 06:27:01.399555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.509 [2024-12-15 06:27:01.399588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.509 qpair failed and we were unable to recover it. 00:36:41.509 [2024-12-15 06:27:01.399703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.509 [2024-12-15 06:27:01.399735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.509 qpair failed and we were unable to recover it. 00:36:41.509 [2024-12-15 06:27:01.399923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.509 [2024-12-15 06:27:01.399957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.509 qpair failed and we were unable to recover it. 00:36:41.509 [2024-12-15 06:27:01.400152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.509 [2024-12-15 06:27:01.400186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.509 qpair failed and we were unable to recover it. 
00:36:41.512 [2024-12-15 06:27:01.426450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.512 [2024-12-15 06:27:01.426483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.512 qpair failed and we were unable to recover it. 00:36:41.512 [2024-12-15 06:27:01.426690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.512 [2024-12-15 06:27:01.426723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.512 qpair failed and we were unable to recover it. 00:36:41.512 [2024-12-15 06:27:01.426963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.512 [2024-12-15 06:27:01.427004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.512 qpair failed and we were unable to recover it. 00:36:41.512 [2024-12-15 06:27:01.427130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.512 [2024-12-15 06:27:01.427163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.512 qpair failed and we were unable to recover it. 00:36:41.512 [2024-12-15 06:27:01.427364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.512 [2024-12-15 06:27:01.427397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.512 qpair failed and we were unable to recover it. 
00:36:41.512 [2024-12-15 06:27:01.427506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.512 [2024-12-15 06:27:01.427539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.512 qpair failed and we were unable to recover it. 00:36:41.512 [2024-12-15 06:27:01.427728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.512 [2024-12-15 06:27:01.427761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.512 qpair failed and we were unable to recover it. 00:36:41.512 [2024-12-15 06:27:01.427947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.512 [2024-12-15 06:27:01.427979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.512 qpair failed and we were unable to recover it. 00:36:41.512 [2024-12-15 06:27:01.428168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.512 [2024-12-15 06:27:01.428202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.512 qpair failed and we were unable to recover it. 00:36:41.512 [2024-12-15 06:27:01.428327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.512 [2024-12-15 06:27:01.428360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.512 qpair failed and we were unable to recover it. 
00:36:41.512 [2024-12-15 06:27:01.428535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.512 [2024-12-15 06:27:01.428568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.512 qpair failed and we were unable to recover it. 00:36:41.512 [2024-12-15 06:27:01.428685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.512 [2024-12-15 06:27:01.428718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.512 qpair failed and we were unable to recover it. 00:36:41.512 [2024-12-15 06:27:01.428841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.512 [2024-12-15 06:27:01.428874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.512 qpair failed and we were unable to recover it. 00:36:41.512 [2024-12-15 06:27:01.429052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.512 [2024-12-15 06:27:01.429087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.512 qpair failed and we were unable to recover it. 00:36:41.512 [2024-12-15 06:27:01.429274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.512 [2024-12-15 06:27:01.429313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.512 qpair failed and we were unable to recover it. 
00:36:41.512 [2024-12-15 06:27:01.429438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.512 [2024-12-15 06:27:01.429472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.512 qpair failed and we were unable to recover it. 00:36:41.512 [2024-12-15 06:27:01.429616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.512 [2024-12-15 06:27:01.429649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.512 qpair failed and we were unable to recover it. 00:36:41.512 [2024-12-15 06:27:01.429831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.512 [2024-12-15 06:27:01.429864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.512 qpair failed and we were unable to recover it. 00:36:41.512 [2024-12-15 06:27:01.430072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.512 [2024-12-15 06:27:01.430107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.512 qpair failed and we were unable to recover it. 00:36:41.512 [2024-12-15 06:27:01.430396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.512 [2024-12-15 06:27:01.430430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.512 qpair failed and we were unable to recover it. 
00:36:41.512 [2024-12-15 06:27:01.430542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.512 [2024-12-15 06:27:01.430576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.512 qpair failed and we were unable to recover it. 00:36:41.512 [2024-12-15 06:27:01.430694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.512 [2024-12-15 06:27:01.430728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.512 qpair failed and we were unable to recover it. 00:36:41.512 [2024-12-15 06:27:01.430858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.512 [2024-12-15 06:27:01.430892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.512 qpair failed and we were unable to recover it. 00:36:41.512 [2024-12-15 06:27:01.431102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.512 [2024-12-15 06:27:01.431137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.512 qpair failed and we were unable to recover it. 00:36:41.512 [2024-12-15 06:27:01.431328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.512 [2024-12-15 06:27:01.431362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.512 qpair failed and we were unable to recover it. 
00:36:41.512 [2024-12-15 06:27:01.431668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.512 [2024-12-15 06:27:01.431702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.512 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.431886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.431920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.432178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.432213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.432366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.432400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.432642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.432677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 
00:36:41.513 [2024-12-15 06:27:01.432916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.432951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.433248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.433283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.433514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.433548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.433802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.433835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.434132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.434168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 
00:36:41.513 [2024-12-15 06:27:01.434424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.434457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.434645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.434678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.434857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.434891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.435142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.435176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.435442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.435476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 
00:36:41.513 [2024-12-15 06:27:01.435664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.435698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.435947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.435981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.436181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.436215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.436456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.436489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.436777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.436811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 
00:36:41.513 [2024-12-15 06:27:01.437074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.437109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.437400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.437433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.437697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.437731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.437979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.438042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.438320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.438354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 
00:36:41.513 [2024-12-15 06:27:01.438636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.438670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.438947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.438982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.439261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.439295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.439568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.439602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.439719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.439759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 
00:36:41.513 [2024-12-15 06:27:01.440007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.440040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.440282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.440316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.440580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.440614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.440817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.440850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.441116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.441152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 
00:36:41.513 [2024-12-15 06:27:01.441339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.441372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.441635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.441668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.441789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.441822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.442007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.442042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 00:36:41.513 [2024-12-15 06:27:01.442227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.513 [2024-12-15 06:27:01.442261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.513 qpair failed and we were unable to recover it. 
00:36:41.514 [2024-12-15 06:27:01.442504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.514 [2024-12-15 06:27:01.442538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.514 qpair failed and we were unable to recover it. 00:36:41.514 [2024-12-15 06:27:01.442807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.514 [2024-12-15 06:27:01.442841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.514 qpair failed and we were unable to recover it. 00:36:41.514 [2024-12-15 06:27:01.442964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.514 [2024-12-15 06:27:01.443005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.514 qpair failed and we were unable to recover it. 00:36:41.514 [2024-12-15 06:27:01.443155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.514 [2024-12-15 06:27:01.443189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.514 qpair failed and we were unable to recover it. 00:36:41.514 [2024-12-15 06:27:01.443362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.514 [2024-12-15 06:27:01.443396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.514 qpair failed and we were unable to recover it. 
00:36:41.514 [2024-12-15 06:27:01.443621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.514 [2024-12-15 06:27:01.443655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.514 qpair failed and we were unable to recover it. 00:36:41.514 [2024-12-15 06:27:01.443835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.514 [2024-12-15 06:27:01.443868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.514 qpair failed and we were unable to recover it. 00:36:41.514 [2024-12-15 06:27:01.444055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.514 [2024-12-15 06:27:01.444089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.514 qpair failed and we were unable to recover it. 00:36:41.514 [2024-12-15 06:27:01.444329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.514 [2024-12-15 06:27:01.444363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.514 qpair failed and we were unable to recover it. 00:36:41.514 [2024-12-15 06:27:01.444549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.514 [2024-12-15 06:27:01.444583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.514 qpair failed and we were unable to recover it. 
00:36:41.514 [2024-12-15 06:27:01.444859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.514 [2024-12-15 06:27:01.444891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.514 qpair failed and we were unable to recover it. 00:36:41.514 [2024-12-15 06:27:01.445179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.514 [2024-12-15 06:27:01.445215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.514 qpair failed and we were unable to recover it. 00:36:41.514 [2024-12-15 06:27:01.445480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.514 [2024-12-15 06:27:01.445513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.514 qpair failed and we were unable to recover it. 00:36:41.514 [2024-12-15 06:27:01.445802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.514 [2024-12-15 06:27:01.445835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.514 qpair failed and we were unable to recover it. 00:36:41.514 [2024-12-15 06:27:01.446030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.514 [2024-12-15 06:27:01.446065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.514 qpair failed and we were unable to recover it. 
00:36:41.517 [... the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock (tqpair=0x7ff288000b90, addr=10.0.0.2, port=4420) / "qpair failed and we were unable to recover it." record repeats through 2024-12-15 06:27:01.473718 ...]
00:36:41.517 [2024-12-15 06:27:01.474029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.517 [2024-12-15 06:27:01.474066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.517 qpair failed and we were unable to recover it. 00:36:41.517 [2024-12-15 06:27:01.474264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.517 [2024-12-15 06:27:01.474298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.517 qpair failed and we were unable to recover it. 00:36:41.517 [2024-12-15 06:27:01.474426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.517 [2024-12-15 06:27:01.474460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.517 qpair failed and we were unable to recover it. 00:36:41.517 [2024-12-15 06:27:01.474665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.517 [2024-12-15 06:27:01.474699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.517 qpair failed and we were unable to recover it. 00:36:41.517 [2024-12-15 06:27:01.474971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.517 [2024-12-15 06:27:01.475016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.517 qpair failed and we were unable to recover it. 
00:36:41.517 [2024-12-15 06:27:01.475226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.517 [2024-12-15 06:27:01.475262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.517 qpair failed and we were unable to recover it. 00:36:41.517 [2024-12-15 06:27:01.475412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.517 [2024-12-15 06:27:01.475448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.517 qpair failed and we were unable to recover it. 00:36:41.517 [2024-12-15 06:27:01.475585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.517 [2024-12-15 06:27:01.475619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.517 qpair failed and we were unable to recover it. 00:36:41.517 [2024-12-15 06:27:01.475815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.517 [2024-12-15 06:27:01.475850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.517 qpair failed and we were unable to recover it. 00:36:41.517 [2024-12-15 06:27:01.476053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.517 [2024-12-15 06:27:01.476091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.517 qpair failed and we were unable to recover it. 
00:36:41.517 [2024-12-15 06:27:01.476294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.517 [2024-12-15 06:27:01.476329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.517 qpair failed and we were unable to recover it. 00:36:41.517 [2024-12-15 06:27:01.476526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97c70 is same with the state(6) to be set 00:36:41.517 [2024-12-15 06:27:01.476889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.517 [2024-12-15 06:27:01.476965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.517 qpair failed and we were unable to recover it. 00:36:41.517 [2024-12-15 06:27:01.477289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.517 [2024-12-15 06:27:01.477329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.517 qpair failed and we were unable to recover it. 00:36:41.517 [2024-12-15 06:27:01.477545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.517 [2024-12-15 06:27:01.477582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.517 qpair failed and we were unable to recover it. 00:36:41.517 [2024-12-15 06:27:01.477852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.517 [2024-12-15 06:27:01.477886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.517 qpair failed and we were unable to recover it. 
00:36:41.517 [2024-12-15 06:27:01.478131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.517 [2024-12-15 06:27:01.478168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.517 qpair failed and we were unable to recover it. 00:36:41.517 [2024-12-15 06:27:01.478296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.517 [2024-12-15 06:27:01.478332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.517 qpair failed and we were unable to recover it. 00:36:41.517 [2024-12-15 06:27:01.478485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.517 [2024-12-15 06:27:01.478520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.517 qpair failed and we were unable to recover it. 00:36:41.517 [2024-12-15 06:27:01.478646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.517 [2024-12-15 06:27:01.478681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.517 qpair failed and we were unable to recover it. 00:36:41.517 [2024-12-15 06:27:01.478925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.517 [2024-12-15 06:27:01.478961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.517 qpair failed and we were unable to recover it. 
00:36:41.517 [2024-12-15 06:27:01.479179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.517 [2024-12-15 06:27:01.479214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.517 qpair failed and we were unable to recover it. 00:36:41.517 [2024-12-15 06:27:01.479359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.517 [2024-12-15 06:27:01.479393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.517 qpair failed and we were unable to recover it. 00:36:41.517 [2024-12-15 06:27:01.479690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.517 [2024-12-15 06:27:01.479727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.517 qpair failed and we were unable to recover it. 00:36:41.517 [2024-12-15 06:27:01.479920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.517 [2024-12-15 06:27:01.479955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.517 qpair failed and we were unable to recover it. 00:36:41.517 [2024-12-15 06:27:01.480169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.517 [2024-12-15 06:27:01.480205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.517 qpair failed and we were unable to recover it. 
00:36:41.518 [2024-12-15 06:27:01.480471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.480507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.480747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.480781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.481049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.481084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.481227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.481263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.481533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.481568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 
00:36:41.518 [2024-12-15 06:27:01.481750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.481785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.482042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.482078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.482281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.482316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.482567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.482601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.482789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.482823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 
00:36:41.518 [2024-12-15 06:27:01.483029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.483066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.483192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.483228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.483370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.483423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.483573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.483609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.483854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.483889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 
00:36:41.518 [2024-12-15 06:27:01.484069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.484105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.484238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.484272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.484465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.484501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.484773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.484808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.485041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.485077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 
00:36:41.518 [2024-12-15 06:27:01.485359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.485396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.485612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.485646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.485785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.485821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.486069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.486106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.486377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.486412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 
00:36:41.518 [2024-12-15 06:27:01.486601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.486636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.486846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.486883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.487080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.487117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.487319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.487355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.487558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.487594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 
00:36:41.518 [2024-12-15 06:27:01.487846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.487881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.488084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.488121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.488343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.488378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.488494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.488528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.488755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.488790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 
00:36:41.518 [2024-12-15 06:27:01.488969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.489013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.489205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.489242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.489420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.489456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.518 [2024-12-15 06:27:01.489651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.518 [2024-12-15 06:27:01.489686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.518 qpair failed and we were unable to recover it. 00:36:41.519 [2024-12-15 06:27:01.489945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.519 [2024-12-15 06:27:01.489982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.519 qpair failed and we were unable to recover it. 
00:36:41.519 [2024-12-15 06:27:01.490193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.519 [2024-12-15 06:27:01.490229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.519 qpair failed and we were unable to recover it. 00:36:41.519 [2024-12-15 06:27:01.490370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.519 [2024-12-15 06:27:01.490406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.519 qpair failed and we were unable to recover it. 00:36:41.519 [2024-12-15 06:27:01.490680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.519 [2024-12-15 06:27:01.490717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.519 qpair failed and we were unable to recover it. 00:36:41.519 [2024-12-15 06:27:01.490845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.519 [2024-12-15 06:27:01.490880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.519 qpair failed and we were unable to recover it. 00:36:41.519 [2024-12-15 06:27:01.491130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.519 [2024-12-15 06:27:01.491168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.519 qpair failed and we were unable to recover it. 
00:36:41.519 [2024-12-15 06:27:01.491415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.519 [2024-12-15 06:27:01.491450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.519 qpair failed and we were unable to recover it. 00:36:41.519 [2024-12-15 06:27:01.491674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.519 [2024-12-15 06:27:01.491710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.519 qpair failed and we were unable to recover it. 00:36:41.519 [2024-12-15 06:27:01.491899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.519 [2024-12-15 06:27:01.491935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.519 qpair failed and we were unable to recover it. 00:36:41.519 [2024-12-15 06:27:01.492132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.519 [2024-12-15 06:27:01.492167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.519 qpair failed and we were unable to recover it. 00:36:41.519 [2024-12-15 06:27:01.492348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.519 [2024-12-15 06:27:01.492383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.519 qpair failed and we were unable to recover it. 
00:36:41.519 [2024-12-15 06:27:01.492587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.519 [2024-12-15 06:27:01.492623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.519 qpair failed and we were unable to recover it.
[identical connect() failed (errno = 111) / qpair connection errors for tqpair=0x7ff284000b90, addr=10.0.0.2, port=4420 repeat from 06:27:01.492 through 06:27:01.518]
00:36:41.522 [2024-12-15 06:27:01.518328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.522 [2024-12-15 06:27:01.518365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.522 qpair failed and we were unable to recover it. 00:36:41.522 [2024-12-15 06:27:01.518484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.522 [2024-12-15 06:27:01.518519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.522 qpair failed and we were unable to recover it. 00:36:41.522 [2024-12-15 06:27:01.518641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.522 [2024-12-15 06:27:01.518677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.522 qpair failed and we were unable to recover it. 00:36:41.522 [2024-12-15 06:27:01.518805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.522 [2024-12-15 06:27:01.518840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.522 qpair failed and we were unable to recover it. 00:36:41.522 [2024-12-15 06:27:01.519027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.522 [2024-12-15 06:27:01.519063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.522 qpair failed and we were unable to recover it. 
00:36:41.522 [2024-12-15 06:27:01.519346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.522 [2024-12-15 06:27:01.519382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.522 qpair failed and we were unable to recover it. 00:36:41.522 [2024-12-15 06:27:01.519602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.522 [2024-12-15 06:27:01.519637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.522 qpair failed and we were unable to recover it. 00:36:41.522 [2024-12-15 06:27:01.519822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.522 [2024-12-15 06:27:01.519862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.522 qpair failed and we were unable to recover it. 00:36:41.522 [2024-12-15 06:27:01.520073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.522 [2024-12-15 06:27:01.520108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.522 qpair failed and we were unable to recover it. 00:36:41.522 [2024-12-15 06:27:01.520222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.522 [2024-12-15 06:27:01.520256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.522 qpair failed and we were unable to recover it. 
00:36:41.522 [2024-12-15 06:27:01.520382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.522 [2024-12-15 06:27:01.520416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.522 qpair failed and we were unable to recover it. 00:36:41.522 [2024-12-15 06:27:01.520555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.522 [2024-12-15 06:27:01.520591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.522 qpair failed and we were unable to recover it. 00:36:41.522 [2024-12-15 06:27:01.520720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.522 [2024-12-15 06:27:01.520756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.522 qpair failed and we were unable to recover it. 00:36:41.522 [2024-12-15 06:27:01.520889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.522 [2024-12-15 06:27:01.520923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.522 qpair failed and we were unable to recover it. 00:36:41.522 [2024-12-15 06:27:01.521061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.522 [2024-12-15 06:27:01.521098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.522 qpair failed and we were unable to recover it. 
00:36:41.522 [2024-12-15 06:27:01.521312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.522 [2024-12-15 06:27:01.521346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.522 qpair failed and we were unable to recover it. 00:36:41.522 [2024-12-15 06:27:01.521534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.522 [2024-12-15 06:27:01.521573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.522 qpair failed and we were unable to recover it. 00:36:41.522 [2024-12-15 06:27:01.521706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.522 [2024-12-15 06:27:01.521740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.522 qpair failed and we were unable to recover it. 00:36:41.522 [2024-12-15 06:27:01.521866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.522 [2024-12-15 06:27:01.521899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.522 qpair failed and we were unable to recover it. 00:36:41.522 [2024-12-15 06:27:01.522145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.522 [2024-12-15 06:27:01.522179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.522 qpair failed and we were unable to recover it. 
00:36:41.522 [2024-12-15 06:27:01.522382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.522 [2024-12-15 06:27:01.522416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.522 qpair failed and we were unable to recover it. 00:36:41.522 [2024-12-15 06:27:01.522558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.522 [2024-12-15 06:27:01.522594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.522 qpair failed and we were unable to recover it. 00:36:41.522 [2024-12-15 06:27:01.522879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.522 [2024-12-15 06:27:01.522913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.522 qpair failed and we were unable to recover it. 00:36:41.522 [2024-12-15 06:27:01.523042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.523078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 00:36:41.523 [2024-12-15 06:27:01.523215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.523249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 
00:36:41.523 [2024-12-15 06:27:01.523519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.523553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 00:36:41.523 [2024-12-15 06:27:01.523823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.523860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 00:36:41.523 [2024-12-15 06:27:01.524132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.524168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 00:36:41.523 [2024-12-15 06:27:01.524377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.524412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 00:36:41.523 [2024-12-15 06:27:01.524704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.524739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 
00:36:41.523 [2024-12-15 06:27:01.525011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.525048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 00:36:41.523 [2024-12-15 06:27:01.525242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.525277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 00:36:41.523 [2024-12-15 06:27:01.525475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.525511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 00:36:41.523 [2024-12-15 06:27:01.525694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.525729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 00:36:41.523 [2024-12-15 06:27:01.526010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.526046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 
00:36:41.523 [2024-12-15 06:27:01.526250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.526284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 00:36:41.523 [2024-12-15 06:27:01.526527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.526563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 00:36:41.523 [2024-12-15 06:27:01.526753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.526789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 00:36:41.523 [2024-12-15 06:27:01.526988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.527035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 00:36:41.523 [2024-12-15 06:27:01.527216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.527254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 
00:36:41.523 [2024-12-15 06:27:01.527438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.527472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 00:36:41.523 [2024-12-15 06:27:01.527676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.527710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 00:36:41.523 [2024-12-15 06:27:01.527922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.527957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 00:36:41.523 [2024-12-15 06:27:01.528175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.528211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 00:36:41.523 [2024-12-15 06:27:01.528360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.528394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 
00:36:41.523 [2024-12-15 06:27:01.528590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.528625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 00:36:41.523 [2024-12-15 06:27:01.528763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.528798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 00:36:41.523 [2024-12-15 06:27:01.528983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.529039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 00:36:41.523 [2024-12-15 06:27:01.529186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.529220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 00:36:41.523 [2024-12-15 06:27:01.529492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.529527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 
00:36:41.523 [2024-12-15 06:27:01.529718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.529753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 00:36:41.523 [2024-12-15 06:27:01.529899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.529933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 00:36:41.523 [2024-12-15 06:27:01.530065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.530101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 00:36:41.523 [2024-12-15 06:27:01.530226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.530260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 00:36:41.523 [2024-12-15 06:27:01.530374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.530408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 
00:36:41.523 [2024-12-15 06:27:01.530604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.530639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 00:36:41.523 [2024-12-15 06:27:01.530777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.523 [2024-12-15 06:27:01.530814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.523 qpair failed and we were unable to recover it. 00:36:41.523 [2024-12-15 06:27:01.531020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.524 [2024-12-15 06:27:01.531057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.524 qpair failed and we were unable to recover it. 00:36:41.524 [2024-12-15 06:27:01.531306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.524 [2024-12-15 06:27:01.531340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.524 qpair failed and we were unable to recover it. 00:36:41.524 [2024-12-15 06:27:01.531477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.524 [2024-12-15 06:27:01.531511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.524 qpair failed and we were unable to recover it. 
00:36:41.524 [2024-12-15 06:27:01.531707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.524 [2024-12-15 06:27:01.531743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.524 qpair failed and we were unable to recover it. 00:36:41.524 [2024-12-15 06:27:01.531931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.524 [2024-12-15 06:27:01.531968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.524 qpair failed and we were unable to recover it. 00:36:41.524 [2024-12-15 06:27:01.532183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.524 [2024-12-15 06:27:01.532221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.524 qpair failed and we were unable to recover it. 00:36:41.524 [2024-12-15 06:27:01.532353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.524 [2024-12-15 06:27:01.532387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.524 qpair failed and we were unable to recover it. 00:36:41.524 [2024-12-15 06:27:01.532581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.524 [2024-12-15 06:27:01.532620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.524 qpair failed and we were unable to recover it. 
00:36:41.524 [2024-12-15 06:27:01.532809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.524 [2024-12-15 06:27:01.532846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.524 qpair failed and we were unable to recover it. 00:36:41.524 [2024-12-15 06:27:01.533061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.524 [2024-12-15 06:27:01.533100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.524 qpair failed and we were unable to recover it. 00:36:41.524 [2024-12-15 06:27:01.533250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.524 [2024-12-15 06:27:01.533284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.524 qpair failed and we were unable to recover it. 00:36:41.524 [2024-12-15 06:27:01.533510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.524 [2024-12-15 06:27:01.533545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.524 qpair failed and we were unable to recover it. 00:36:41.524 [2024-12-15 06:27:01.533681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.524 [2024-12-15 06:27:01.533717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.524 qpair failed and we were unable to recover it. 
00:36:41.524 [2024-12-15 06:27:01.533981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.524 [2024-12-15 06:27:01.534027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.524 qpair failed and we were unable to recover it. 00:36:41.524 [2024-12-15 06:27:01.534280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.524 [2024-12-15 06:27:01.534316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.524 qpair failed and we were unable to recover it. 00:36:41.524 [2024-12-15 06:27:01.534556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.524 [2024-12-15 06:27:01.534591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.524 qpair failed and we were unable to recover it. 00:36:41.524 [2024-12-15 06:27:01.534858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.524 [2024-12-15 06:27:01.534893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.524 qpair failed and we were unable to recover it. 00:36:41.524 [2024-12-15 06:27:01.535124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.524 [2024-12-15 06:27:01.535162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.524 qpair failed and we were unable to recover it. 
00:36:41.524 [2024-12-15 06:27:01.535345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.524 [2024-12-15 06:27:01.535381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.524 qpair failed and we were unable to recover it.
[the identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" record repeats continuously for tqpair=0x7ff284000b90 from 06:27:01.535345 through 06:27:01.554957]
00:36:41.526 [2024-12-15 06:27:01.554957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.526 [2024-12-15 06:27:01.555051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:41.526 qpair failed and we were unable to recover it.
[the same error record then repeats for tqpair=0x7ff288000b90 through 06:27:01.562883]
00:36:41.527 [2024-12-15 06:27:01.563018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.527 [2024-12-15 06:27:01.563055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.527 qpair failed and we were unable to recover it. 00:36:41.527 [2024-12-15 06:27:01.563274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.527 [2024-12-15 06:27:01.563310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.527 qpair failed and we were unable to recover it. 00:36:41.527 [2024-12-15 06:27:01.563495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.527 [2024-12-15 06:27:01.563529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.527 qpair failed and we were unable to recover it. 00:36:41.527 [2024-12-15 06:27:01.565039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.527 [2024-12-15 06:27:01.565099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.527 qpair failed and we were unable to recover it. 00:36:41.527 [2024-12-15 06:27:01.565341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.527 [2024-12-15 06:27:01.565375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.527 qpair failed and we were unable to recover it. 
00:36:41.527 [2024-12-15 06:27:01.565633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.527 [2024-12-15 06:27:01.565670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.527 qpair failed and we were unable to recover it. 00:36:41.527 [2024-12-15 06:27:01.565873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.527 [2024-12-15 06:27:01.565909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.527 qpair failed and we were unable to recover it. 00:36:41.527 [2024-12-15 06:27:01.566187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.527 [2024-12-15 06:27:01.566223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.527 qpair failed and we were unable to recover it. 00:36:41.527 [2024-12-15 06:27:01.566430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.527 [2024-12-15 06:27:01.566465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.527 qpair failed and we were unable to recover it. 00:36:41.527 [2024-12-15 06:27:01.566586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.527 [2024-12-15 06:27:01.566631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.527 qpair failed and we were unable to recover it. 
00:36:41.527 [2024-12-15 06:27:01.566780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.527 [2024-12-15 06:27:01.566815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.527 qpair failed and we were unable to recover it. 00:36:41.527 [2024-12-15 06:27:01.567065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.527 [2024-12-15 06:27:01.567103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.527 qpair failed and we were unable to recover it. 00:36:41.527 [2024-12-15 06:27:01.567284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.527 [2024-12-15 06:27:01.567320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.527 qpair failed and we were unable to recover it. 00:36:41.527 [2024-12-15 06:27:01.567440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.527 [2024-12-15 06:27:01.567475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.527 qpair failed and we were unable to recover it. 00:36:41.527 [2024-12-15 06:27:01.567691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.527 [2024-12-15 06:27:01.567727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.527 qpair failed and we were unable to recover it. 
00:36:41.527 [2024-12-15 06:27:01.567862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.527 [2024-12-15 06:27:01.567897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.527 qpair failed and we were unable to recover it. 00:36:41.527 [2024-12-15 06:27:01.568143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.527 [2024-12-15 06:27:01.568179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.527 qpair failed and we were unable to recover it. 00:36:41.527 [2024-12-15 06:27:01.568328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.527 [2024-12-15 06:27:01.568369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.527 qpair failed and we were unable to recover it. 00:36:41.527 [2024-12-15 06:27:01.568499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.527 [2024-12-15 06:27:01.568533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.527 qpair failed and we were unable to recover it. 00:36:41.527 [2024-12-15 06:27:01.568766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.527 [2024-12-15 06:27:01.568802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.527 qpair failed and we were unable to recover it. 
00:36:41.527 [2024-12-15 06:27:01.568937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.527 [2024-12-15 06:27:01.568972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.527 qpair failed and we were unable to recover it. 00:36:41.527 [2024-12-15 06:27:01.569189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.527 [2024-12-15 06:27:01.569223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.527 qpair failed and we were unable to recover it. 00:36:41.528 [2024-12-15 06:27:01.569413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.569447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 00:36:41.528 [2024-12-15 06:27:01.569591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.569626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 00:36:41.528 [2024-12-15 06:27:01.569817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.569851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 
00:36:41.528 [2024-12-15 06:27:01.569979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.570034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 00:36:41.528 [2024-12-15 06:27:01.570172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.570206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 00:36:41.528 [2024-12-15 06:27:01.570327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.570362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 00:36:41.528 [2024-12-15 06:27:01.570490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.570525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 00:36:41.528 [2024-12-15 06:27:01.570729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.570763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 
00:36:41.528 [2024-12-15 06:27:01.570949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.570983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 00:36:41.528 [2024-12-15 06:27:01.571190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.571224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 00:36:41.528 [2024-12-15 06:27:01.571498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.571533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 00:36:41.528 [2024-12-15 06:27:01.571659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.571693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 00:36:41.528 [2024-12-15 06:27:01.571825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.571862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 
00:36:41.528 [2024-12-15 06:27:01.572044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.572080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 00:36:41.528 [2024-12-15 06:27:01.572258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.572293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 00:36:41.528 [2024-12-15 06:27:01.572426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.572460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 00:36:41.528 [2024-12-15 06:27:01.572564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.572598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 00:36:41.528 [2024-12-15 06:27:01.572732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.572765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 
00:36:41.528 [2024-12-15 06:27:01.572952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.572987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 00:36:41.528 [2024-12-15 06:27:01.573113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.573148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 00:36:41.528 [2024-12-15 06:27:01.573327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.573362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 00:36:41.528 [2024-12-15 06:27:01.573610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.573645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 00:36:41.528 [2024-12-15 06:27:01.573779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.573814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 
00:36:41.528 [2024-12-15 06:27:01.574028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.574066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 00:36:41.528 [2024-12-15 06:27:01.574329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.574364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 00:36:41.528 [2024-12-15 06:27:01.574477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.574510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 00:36:41.528 [2024-12-15 06:27:01.574624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.574658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 00:36:41.528 [2024-12-15 06:27:01.574851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.574887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 
00:36:41.528 [2024-12-15 06:27:01.575149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.575185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 00:36:41.528 [2024-12-15 06:27:01.575376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.575412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 00:36:41.528 [2024-12-15 06:27:01.575624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.575659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 00:36:41.528 [2024-12-15 06:27:01.575853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.575888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 00:36:41.528 [2024-12-15 06:27:01.576066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.576102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 
00:36:41.528 [2024-12-15 06:27:01.576234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.576268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 00:36:41.528 [2024-12-15 06:27:01.576383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.528 [2024-12-15 06:27:01.576417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.528 qpair failed and we were unable to recover it. 00:36:41.529 [2024-12-15 06:27:01.576535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.529 [2024-12-15 06:27:01.576576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.529 qpair failed and we were unable to recover it. 00:36:41.529 [2024-12-15 06:27:01.576760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.529 [2024-12-15 06:27:01.576795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.529 qpair failed and we were unable to recover it. 00:36:41.529 [2024-12-15 06:27:01.576977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.529 [2024-12-15 06:27:01.577018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.529 qpair failed and we were unable to recover it. 
00:36:41.529 [2024-12-15 06:27:01.577148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.529 [2024-12-15 06:27:01.577182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.529 qpair failed and we were unable to recover it. 00:36:41.529 [2024-12-15 06:27:01.577363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.529 [2024-12-15 06:27:01.577409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.529 qpair failed and we were unable to recover it. 00:36:41.529 [2024-12-15 06:27:01.577653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.529 [2024-12-15 06:27:01.577687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.529 qpair failed and we were unable to recover it. 00:36:41.529 [2024-12-15 06:27:01.577793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.529 [2024-12-15 06:27:01.577828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.529 qpair failed and we were unable to recover it. 00:36:41.529 [2024-12-15 06:27:01.578024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.529 [2024-12-15 06:27:01.578059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.529 qpair failed and we were unable to recover it. 
00:36:41.529 [2024-12-15 06:27:01.578176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.529 [2024-12-15 06:27:01.578210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.529 qpair failed and we were unable to recover it. 00:36:41.529 [2024-12-15 06:27:01.578416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.529 [2024-12-15 06:27:01.578449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.529 qpair failed and we were unable to recover it. 00:36:41.529 [2024-12-15 06:27:01.578637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.529 [2024-12-15 06:27:01.578672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.529 qpair failed and we were unable to recover it. 00:36:41.529 [2024-12-15 06:27:01.578883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.529 [2024-12-15 06:27:01.578917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.529 qpair failed and we were unable to recover it. 00:36:41.529 [2024-12-15 06:27:01.579105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.529 [2024-12-15 06:27:01.579140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.529 qpair failed and we were unable to recover it. 
00:36:41.529 [2024-12-15 06:27:01.579281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.529 [2024-12-15 06:27:01.579317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.529 qpair failed and we were unable to recover it. 00:36:41.529 [2024-12-15 06:27:01.579440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.529 [2024-12-15 06:27:01.579472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.529 qpair failed and we were unable to recover it. 00:36:41.529 [2024-12-15 06:27:01.579684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.529 [2024-12-15 06:27:01.579718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.529 qpair failed and we were unable to recover it. 00:36:41.529 [2024-12-15 06:27:01.579833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.529 [2024-12-15 06:27:01.579867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.529 qpair failed and we were unable to recover it. 00:36:41.529 [2024-12-15 06:27:01.580006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.529 [2024-12-15 06:27:01.580042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.529 qpair failed and we were unable to recover it. 
00:36:41.529 [2024-12-15 06:27:01.580221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.529 [2024-12-15 06:27:01.580255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.529 qpair failed and we were unable to recover it. 00:36:41.529 [2024-12-15 06:27:01.580394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.529 [2024-12-15 06:27:01.580428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.529 qpair failed and we were unable to recover it. 00:36:41.529 [2024-12-15 06:27:01.580626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.529 [2024-12-15 06:27:01.580661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.529 qpair failed and we were unable to recover it. 00:36:41.529 [2024-12-15 06:27:01.580770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.529 [2024-12-15 06:27:01.580803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.529 qpair failed and we were unable to recover it. 00:36:41.529 [2024-12-15 06:27:01.581015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.529 [2024-12-15 06:27:01.581050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.529 qpair failed and we were unable to recover it. 
00:36:41.529 [2024-12-15 06:27:01.581176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.529 [2024-12-15 06:27:01.581210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.529 qpair failed and we were unable to recover it. 00:36:41.529 [2024-12-15 06:27:01.581475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.529 [2024-12-15 06:27:01.581508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.529 qpair failed and we were unable to recover it. 00:36:41.529 [2024-12-15 06:27:01.581643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.529 [2024-12-15 06:27:01.581678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.529 qpair failed and we were unable to recover it. 00:36:41.529 [2024-12-15 06:27:01.581870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.529 [2024-12-15 06:27:01.581903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.529 qpair failed and we were unable to recover it. 00:36:41.529 [2024-12-15 06:27:01.582027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.529 [2024-12-15 06:27:01.582064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.529 qpair failed and we were unable to recover it. 
00:36:41.529 [2024-12-15 06:27:01.582256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.529 [2024-12-15 06:27:01.582290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:41.529 qpair failed and we were unable to recover it.
00:36:41.529 [2024-12-15 06:27:01.582414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.529 [2024-12-15 06:27:01.582448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:41.529 qpair failed and we were unable to recover it.
00:36:41.529 [2024-12-15 06:27:01.582557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.529 [2024-12-15 06:27:01.582591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:41.529 qpair failed and we were unable to recover it.
00:36:41.529 [2024-12-15 06:27:01.582786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.529 [2024-12-15 06:27:01.582818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:41.529 qpair failed and we were unable to recover it.
00:36:41.529 [2024-12-15 06:27:01.582927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.529 [2024-12-15 06:27:01.582961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:41.529 qpair failed and we were unable to recover it.
00:36:41.529 [2024-12-15 06:27:01.583147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.529 [2024-12-15 06:27:01.583224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.529 qpair failed and we were unable to recover it.
00:36:41.529 [2024-12-15 06:27:01.583511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.529 [2024-12-15 06:27:01.583549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.529 qpair failed and we were unable to recover it.
00:36:41.529 [2024-12-15 06:27:01.583736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.529 [2024-12-15 06:27:01.583772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.529 qpair failed and we were unable to recover it.
00:36:41.529 [2024-12-15 06:27:01.583883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.529 [2024-12-15 06:27:01.583918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.529 qpair failed and we were unable to recover it.
00:36:41.529 [2024-12-15 06:27:01.584135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.529 [2024-12-15 06:27:01.584172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.529 qpair failed and we were unable to recover it.
00:36:41.529 [2024-12-15 06:27:01.584365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.529 [2024-12-15 06:27:01.584400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.584533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.584569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.584683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.584727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.584906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.584940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.585063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.585099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.585348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.585383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.585625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.585659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.585869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.585904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.586093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.586130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.586321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.586356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.586480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.586514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.586634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.586669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.586794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.586828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.586957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.587000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.587219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.587254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.587366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.587400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.587587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.587620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.587818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.587852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.588043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.588080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.588197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.588230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.588365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.588398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.588576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.588610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.588784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.588819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.588957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.588999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.589179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.589215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.589331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.589367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.589544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.589579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.589766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.589800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.589988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.590030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.590205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.590246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.590427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.590461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.590706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.590744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.590929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.590963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.591159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.591194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.591314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.591349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.591457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.591491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.591619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.591653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.591834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.591869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.530 [2024-12-15 06:27:01.592005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.530 [2024-12-15 06:27:01.592040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.530 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.592227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.592262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.592453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.592488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.592606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.592640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.592837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.592871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.593057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.593093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.593230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.593264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.593441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.593476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.593579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.593612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.593718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.593752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.593876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.593910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.594021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.594056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.594296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.594331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.594509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.594544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.594721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.594755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.594946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.594981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.595120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.595155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.595272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.595306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.595493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.595529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.595656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.595690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.595888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.595922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.596190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.596226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.596346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.596379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.596567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.596601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.596798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.596833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.596956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.597001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.597192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.597227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.597362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.597396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.597517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.597551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.597745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.597781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.597936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.597971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.598113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.598155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.598281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.598314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.598507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.598541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.598667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.598701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.598814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.598849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.598953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.598987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.599123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.599159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.599302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.531 [2024-12-15 06:27:01.599335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.531 qpair failed and we were unable to recover it.
00:36:41.531 [2024-12-15 06:27:01.599448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.532 [2024-12-15 06:27:01.599482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.532 qpair failed and we were unable to recover it.
00:36:41.532 [2024-12-15 06:27:01.599659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.532 [2024-12-15 06:27:01.599693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.532 qpair failed and we were unable to recover it.
00:36:41.532 [2024-12-15 06:27:01.599890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.532 [2024-12-15 06:27:01.599924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.532 qpair failed and we were unable to recover it.
00:36:41.811 [2024-12-15 06:27:01.600107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.811 [2024-12-15 06:27:01.600142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.811 qpair failed and we were unable to recover it.
00:36:41.811 [2024-12-15 06:27:01.600400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.811 [2024-12-15 06:27:01.600435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.811 qpair failed and we were unable to recover it.
00:36:41.811 [2024-12-15 06:27:01.600558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.811 [2024-12-15 06:27:01.600594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.811 qpair failed and we were unable to recover it.
00:36:41.811 [2024-12-15 06:27:01.600727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.811 [2024-12-15 06:27:01.600762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.811 qpair failed and we were unable to recover it.
00:36:41.811 [2024-12-15 06:27:01.600947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.811 [2024-12-15 06:27:01.600982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.811 qpair failed and we were unable to recover it.
00:36:41.811 [2024-12-15 06:27:01.601108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.811 [2024-12-15 06:27:01.601143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.811 qpair failed and we were unable to recover it. 00:36:41.811 [2024-12-15 06:27:01.601328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.811 [2024-12-15 06:27:01.601365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.811 qpair failed and we were unable to recover it. 00:36:41.811 [2024-12-15 06:27:01.601476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.811 [2024-12-15 06:27:01.601510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.811 qpair failed and we were unable to recover it. 00:36:41.811 [2024-12-15 06:27:01.601616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.811 [2024-12-15 06:27:01.601651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.811 qpair failed and we were unable to recover it. 00:36:41.811 [2024-12-15 06:27:01.601839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.811 [2024-12-15 06:27:01.601873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.811 qpair failed and we were unable to recover it. 
00:36:41.811 [2024-12-15 06:27:01.602016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.811 [2024-12-15 06:27:01.602051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.811 qpair failed and we were unable to recover it. 00:36:41.811 [2024-12-15 06:27:01.602183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.811 [2024-12-15 06:27:01.602217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.811 qpair failed and we were unable to recover it. 00:36:41.811 [2024-12-15 06:27:01.602415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.811 [2024-12-15 06:27:01.602449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.811 qpair failed and we were unable to recover it. 00:36:41.811 [2024-12-15 06:27:01.602712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.811 [2024-12-15 06:27:01.602746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.811 qpair failed and we were unable to recover it. 00:36:41.811 [2024-12-15 06:27:01.602865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.811 [2024-12-15 06:27:01.602900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.811 qpair failed and we were unable to recover it. 
00:36:41.811 [2024-12-15 06:27:01.603032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.811 [2024-12-15 06:27:01.603077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.811 qpair failed and we were unable to recover it. 00:36:41.811 [2024-12-15 06:27:01.603234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.811 [2024-12-15 06:27:01.603268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.811 qpair failed and we were unable to recover it. 00:36:41.811 [2024-12-15 06:27:01.603441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.811 [2024-12-15 06:27:01.603474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.811 qpair failed and we were unable to recover it. 00:36:41.811 [2024-12-15 06:27:01.603595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.811 [2024-12-15 06:27:01.603630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.811 qpair failed and we were unable to recover it. 00:36:41.811 [2024-12-15 06:27:01.603902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.811 [2024-12-15 06:27:01.603935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.811 qpair failed and we were unable to recover it. 
00:36:41.811 [2024-12-15 06:27:01.604125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.811 [2024-12-15 06:27:01.604161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.811 qpair failed and we were unable to recover it. 00:36:41.811 [2024-12-15 06:27:01.604337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.811 [2024-12-15 06:27:01.604372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.811 qpair failed and we were unable to recover it. 00:36:41.811 [2024-12-15 06:27:01.604483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.811 [2024-12-15 06:27:01.604517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.811 qpair failed and we were unable to recover it. 00:36:41.811 [2024-12-15 06:27:01.604632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.604667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.604859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.604893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 
00:36:41.812 [2024-12-15 06:27:01.605146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.605181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.605304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.605338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.605528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.605562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.605757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.605791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.606050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.606092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 
00:36:41.812 [2024-12-15 06:27:01.606307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.606341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.606517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.606551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.606824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.606860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.607139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.607176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.607303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.607338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 
00:36:41.812 [2024-12-15 06:27:01.607579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.607614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.607857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.607892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.608121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.608157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.608427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.608461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.608669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.608704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 
00:36:41.812 [2024-12-15 06:27:01.609006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.609047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.609168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.609203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.609347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.609382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.609498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.609533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.609773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.609808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 
00:36:41.812 [2024-12-15 06:27:01.609985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.610031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.610323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.610359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.610482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.610517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.610763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.610798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.610982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.611047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 
00:36:41.812 [2024-12-15 06:27:01.611262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.611297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.611486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.611521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.611635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.611670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.611920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.611954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.612094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.612131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 
00:36:41.812 [2024-12-15 06:27:01.612323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.612357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.612540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.612575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.612716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.612751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.613012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.613046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.613289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.613325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 
00:36:41.812 [2024-12-15 06:27:01.613546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.613582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.613834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.613869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.614068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.614105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.614300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.614334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 00:36:41.812 [2024-12-15 06:27:01.614510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.812 [2024-12-15 06:27:01.614545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.812 qpair failed and we were unable to recover it. 
00:36:41.812 [2024-12-15 06:27:01.614738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.813 [2024-12-15 06:27:01.614772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.813 qpair failed and we were unable to recover it. 00:36:41.813 [2024-12-15 06:27:01.614949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.813 [2024-12-15 06:27:01.614983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.813 qpair failed and we were unable to recover it. 00:36:41.813 [2024-12-15 06:27:01.615280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.813 [2024-12-15 06:27:01.615316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.813 qpair failed and we were unable to recover it. 00:36:41.813 [2024-12-15 06:27:01.615516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.813 [2024-12-15 06:27:01.615550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.813 qpair failed and we were unable to recover it. 00:36:41.813 [2024-12-15 06:27:01.615746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.813 [2024-12-15 06:27:01.615788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.813 qpair failed and we were unable to recover it. 
00:36:41.813 [2024-12-15 06:27:01.616000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.813 [2024-12-15 06:27:01.616036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.813 qpair failed and we were unable to recover it. 00:36:41.813 [2024-12-15 06:27:01.616289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.813 [2024-12-15 06:27:01.616323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.813 qpair failed and we were unable to recover it. 00:36:41.813 [2024-12-15 06:27:01.616447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.813 [2024-12-15 06:27:01.616482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.813 qpair failed and we were unable to recover it. 00:36:41.813 [2024-12-15 06:27:01.616750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.813 [2024-12-15 06:27:01.616784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.813 qpair failed and we were unable to recover it. 00:36:41.813 [2024-12-15 06:27:01.617050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.813 [2024-12-15 06:27:01.617086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.813 qpair failed and we were unable to recover it. 
00:36:41.813 [2024-12-15 06:27:01.617281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.813 [2024-12-15 06:27:01.617316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.813 qpair failed and we were unable to recover it. 00:36:41.813 [2024-12-15 06:27:01.617557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.813 [2024-12-15 06:27:01.617591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.813 qpair failed and we were unable to recover it. 00:36:41.813 [2024-12-15 06:27:01.617852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.813 [2024-12-15 06:27:01.617887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.813 qpair failed and we were unable to recover it. 00:36:41.813 [2024-12-15 06:27:01.618110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.813 [2024-12-15 06:27:01.618146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.813 qpair failed and we were unable to recover it. 00:36:41.813 [2024-12-15 06:27:01.618279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.813 [2024-12-15 06:27:01.618314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.813 qpair failed and we were unable to recover it. 
00:36:41.813 [2024-12-15 06:27:01.618562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.813 [2024-12-15 06:27:01.618596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.813 qpair failed and we were unable to recover it. 00:36:41.813 [2024-12-15 06:27:01.618791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.813 [2024-12-15 06:27:01.618827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.813 qpair failed and we were unable to recover it. 00:36:41.813 [2024-12-15 06:27:01.619041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.813 [2024-12-15 06:27:01.619076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.813 qpair failed and we were unable to recover it. 00:36:41.813 [2024-12-15 06:27:01.619316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.813 [2024-12-15 06:27:01.619351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.813 qpair failed and we were unable to recover it. 00:36:41.813 [2024-12-15 06:27:01.619475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.813 [2024-12-15 06:27:01.619510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.813 qpair failed and we were unable to recover it. 
00:36:41.813 [2024-12-15 06:27:01.619704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.813 [2024-12-15 06:27:01.619739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.813 qpair failed and we were unable to recover it. 00:36:41.813 [2024-12-15 06:27:01.619865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.813 [2024-12-15 06:27:01.619900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.813 qpair failed and we were unable to recover it. 00:36:41.813 [2024-12-15 06:27:01.620165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.813 [2024-12-15 06:27:01.620202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.813 qpair failed and we were unable to recover it. 00:36:41.813 [2024-12-15 06:27:01.620391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.813 [2024-12-15 06:27:01.620427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.813 qpair failed and we were unable to recover it. 00:36:41.813 [2024-12-15 06:27:01.620607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.813 [2024-12-15 06:27:01.620643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.813 qpair failed and we were unable to recover it. 
00:36:41.813 [2024-12-15 06:27:01.620839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.813 [2024-12-15 06:27:01.620875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.813 qpair failed and we were unable to recover it.
[... the same three-record sequence — connect() failed (errno = 111), sock connection error, "qpair failed and we were unable to recover it." — repeats for tqpair=0x7ff284000b90 with timestamps from 06:27:01.620839 through 06:27:01.641978 ...]
00:36:41.815 [2024-12-15 06:27:01.642221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.815 [2024-12-15 06:27:01.642296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:41.815 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7ff288000b90 with timestamps from 06:27:01.642221 through 06:27:01.647160 ...]
00:36:41.816 [2024-12-15 06:27:01.647125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.816 [2024-12-15 06:27:01.647160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:41.816 qpair failed and we were unable to recover it.
00:36:41.816 [2024-12-15 06:27:01.647334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.816 [2024-12-15 06:27:01.647368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.816 qpair failed and we were unable to recover it. 00:36:41.816 [2024-12-15 06:27:01.647492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.816 [2024-12-15 06:27:01.647525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.816 qpair failed and we were unable to recover it. 00:36:41.816 [2024-12-15 06:27:01.647696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.816 [2024-12-15 06:27:01.647730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.816 qpair failed and we were unable to recover it. 00:36:41.816 [2024-12-15 06:27:01.647860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.816 [2024-12-15 06:27:01.647892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.816 qpair failed and we were unable to recover it. 00:36:41.816 [2024-12-15 06:27:01.648097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.816 [2024-12-15 06:27:01.648133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.816 qpair failed and we were unable to recover it. 
00:36:41.816 [2024-12-15 06:27:01.648245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.816 [2024-12-15 06:27:01.648279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.816 qpair failed and we were unable to recover it. 00:36:41.816 [2024-12-15 06:27:01.648460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.816 [2024-12-15 06:27:01.648493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.816 qpair failed and we were unable to recover it. 00:36:41.816 [2024-12-15 06:27:01.648611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.816 [2024-12-15 06:27:01.648645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.816 qpair failed and we were unable to recover it. 00:36:41.816 [2024-12-15 06:27:01.648755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.816 [2024-12-15 06:27:01.648788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.816 qpair failed and we were unable to recover it. 00:36:41.816 [2024-12-15 06:27:01.649054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.816 [2024-12-15 06:27:01.649090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.816 qpair failed and we were unable to recover it. 
00:36:41.816 [2024-12-15 06:27:01.649303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.816 [2024-12-15 06:27:01.649335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.816 qpair failed and we were unable to recover it. 00:36:41.816 [2024-12-15 06:27:01.649584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.816 [2024-12-15 06:27:01.649618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.816 qpair failed and we were unable to recover it. 00:36:41.816 [2024-12-15 06:27:01.649861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.816 [2024-12-15 06:27:01.649895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.816 qpair failed and we were unable to recover it. 00:36:41.816 [2024-12-15 06:27:01.650081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.816 [2024-12-15 06:27:01.650115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.816 qpair failed and we were unable to recover it. 00:36:41.816 [2024-12-15 06:27:01.650292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.816 [2024-12-15 06:27:01.650326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.816 qpair failed and we were unable to recover it. 
00:36:41.816 [2024-12-15 06:27:01.650524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.816 [2024-12-15 06:27:01.650556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.816 qpair failed and we were unable to recover it. 00:36:41.816 [2024-12-15 06:27:01.650680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.816 [2024-12-15 06:27:01.650715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.816 qpair failed and we were unable to recover it. 00:36:41.816 [2024-12-15 06:27:01.650902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.816 [2024-12-15 06:27:01.650935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.816 qpair failed and we were unable to recover it. 00:36:41.816 [2024-12-15 06:27:01.651145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.816 [2024-12-15 06:27:01.651180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.816 qpair failed and we were unable to recover it. 00:36:41.816 [2024-12-15 06:27:01.651354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.816 [2024-12-15 06:27:01.651388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.816 qpair failed and we were unable to recover it. 
00:36:41.816 [2024-12-15 06:27:01.651573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.816 [2024-12-15 06:27:01.651605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.816 qpair failed and we were unable to recover it. 00:36:41.816 [2024-12-15 06:27:01.651846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.651880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.652075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.652110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.652296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.652329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.652635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.652685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 
00:36:41.817 [2024-12-15 06:27:01.652811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.652844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.653043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.653078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.653254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.653287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.653396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.653430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.653555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.653588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 
00:36:41.817 [2024-12-15 06:27:01.653871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.653905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.654032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.654068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.654198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.654231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.654361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.654394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.654512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.654546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 
00:36:41.817 [2024-12-15 06:27:01.654788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.654821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.654947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.654981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.655115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.655149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.655400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.655434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.655673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.655706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 
00:36:41.817 [2024-12-15 06:27:01.655892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.655925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.656189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.656224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.656465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.656499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.656742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.656775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.656901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.656934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 
00:36:41.817 [2024-12-15 06:27:01.657132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.657167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.657428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.657461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.657723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.657756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.657892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.657926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.658121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.658155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 
00:36:41.817 [2024-12-15 06:27:01.658398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.658432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.658663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.658737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.658943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.658981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.659203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.659238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.659428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.659462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 
00:36:41.817 [2024-12-15 06:27:01.659662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.659696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.659836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.659870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.660005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.660041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.660224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.660257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.660500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.660534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 
00:36:41.817 [2024-12-15 06:27:01.660721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.660755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.660940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.660973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.661171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.817 [2024-12-15 06:27:01.661204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.817 qpair failed and we were unable to recover it. 00:36:41.817 [2024-12-15 06:27:01.661460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.818 [2024-12-15 06:27:01.661494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.818 qpair failed and we were unable to recover it. 00:36:41.818 [2024-12-15 06:27:01.661725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.818 [2024-12-15 06:27:01.661769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.818 qpair failed and we were unable to recover it. 
00:36:41.818 [2024-12-15 06:27:01.661947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.818 [2024-12-15 06:27:01.661980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.818 qpair failed and we were unable to recover it. 00:36:41.818 [2024-12-15 06:27:01.662238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.818 [2024-12-15 06:27:01.662273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.818 qpair failed and we were unable to recover it. 00:36:41.818 [2024-12-15 06:27:01.662488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.818 [2024-12-15 06:27:01.662522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.818 qpair failed and we were unable to recover it. 00:36:41.818 [2024-12-15 06:27:01.662657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.818 [2024-12-15 06:27:01.662690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.818 qpair failed and we were unable to recover it. 00:36:41.818 [2024-12-15 06:27:01.662956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.818 [2024-12-15 06:27:01.662989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.818 qpair failed and we were unable to recover it. 
00:36:41.818 [2024-12-15 06:27:01.663204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.818 [2024-12-15 06:27:01.663238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.818 qpair failed and we were unable to recover it. 00:36:41.818 [2024-12-15 06:27:01.663376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.818 [2024-12-15 06:27:01.663408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.818 qpair failed and we were unable to recover it. 00:36:41.818 [2024-12-15 06:27:01.663535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.818 [2024-12-15 06:27:01.663568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.818 qpair failed and we were unable to recover it. 00:36:41.818 [2024-12-15 06:27:01.663756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.818 [2024-12-15 06:27:01.663790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.818 qpair failed and we were unable to recover it. 00:36:41.818 [2024-12-15 06:27:01.663978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.818 [2024-12-15 06:27:01.664023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.818 qpair failed and we were unable to recover it. 
00:36:41.818 [2024-12-15 06:27:01.664197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.818 [2024-12-15 06:27:01.664231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:41.818 qpair failed and we were unable to recover it.
[... identical connect()/qpair-failure triplet repeated for tqpair=0x7ff290000b90 from 06:27:01.664 to 06:27:01.677 ...]
00:36:41.819 [2024-12-15 06:27:01.677892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.819 [2024-12-15 06:27:01.677967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.819 qpair failed and we were unable to recover it.
00:36:41.819 [2024-12-15 06:27:01.678239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.819 [2024-12-15 06:27:01.678312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:41.819 qpair failed and we were unable to recover it.
[... identical triplet repeated for tqpair=0x7ff288000b90 from 06:27:01.678 to 06:27:01.687, then again for tqpair=0x7ff284000b90 from 06:27:01.687 to 06:27:01.691 ...]
00:36:41.821 [2024-12-15 06:27:01.692126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.692161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 00:36:41.821 [2024-12-15 06:27:01.692416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.692450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 00:36:41.821 [2024-12-15 06:27:01.692724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.692759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 00:36:41.821 [2024-12-15 06:27:01.693029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.693064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 00:36:41.821 [2024-12-15 06:27:01.693256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.693291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 
00:36:41.821 [2024-12-15 06:27:01.693484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.693519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 00:36:41.821 [2024-12-15 06:27:01.693726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.693761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 00:36:41.821 [2024-12-15 06:27:01.693874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.693909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 00:36:41.821 [2024-12-15 06:27:01.694188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.694224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 00:36:41.821 [2024-12-15 06:27:01.694490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.694524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 
00:36:41.821 [2024-12-15 06:27:01.694732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.694767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 00:36:41.821 [2024-12-15 06:27:01.695011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.695046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 00:36:41.821 [2024-12-15 06:27:01.695223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.695256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 00:36:41.821 [2024-12-15 06:27:01.695401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.695435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 00:36:41.821 [2024-12-15 06:27:01.695617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.695651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 
00:36:41.821 [2024-12-15 06:27:01.695846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.695879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 00:36:41.821 [2024-12-15 06:27:01.696086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.696122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 00:36:41.821 [2024-12-15 06:27:01.696329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.696364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 00:36:41.821 [2024-12-15 06:27:01.696608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.696644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 00:36:41.821 [2024-12-15 06:27:01.696768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.696803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 
00:36:41.821 [2024-12-15 06:27:01.696927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.696961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 00:36:41.821 [2024-12-15 06:27:01.697163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.697200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 00:36:41.821 [2024-12-15 06:27:01.697375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.697410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 00:36:41.821 [2024-12-15 06:27:01.697608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.697642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 00:36:41.821 [2024-12-15 06:27:01.697815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.697849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 
00:36:41.821 [2024-12-15 06:27:01.698125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.698179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 00:36:41.821 [2024-12-15 06:27:01.698380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.698414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 00:36:41.821 [2024-12-15 06:27:01.698534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.698568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 00:36:41.821 [2024-12-15 06:27:01.698697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.698732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 00:36:41.821 [2024-12-15 06:27:01.698916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.698951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 
00:36:41.821 [2024-12-15 06:27:01.699147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.699183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 00:36:41.821 [2024-12-15 06:27:01.699317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.699352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 00:36:41.821 [2024-12-15 06:27:01.699528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.699562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 00:36:41.821 [2024-12-15 06:27:01.699691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.699725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 00:36:41.821 [2024-12-15 06:27:01.699854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.699895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 
00:36:41.821 [2024-12-15 06:27:01.700024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.821 [2024-12-15 06:27:01.700061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.821 qpair failed and we were unable to recover it. 00:36:41.821 [2024-12-15 06:27:01.700173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.700207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.700427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.700461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.700646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.700680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.700786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.700820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 
00:36:41.822 [2024-12-15 06:27:01.701018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.701054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.701321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.701356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.701463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.701496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.701621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.701656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.701843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.701877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 
00:36:41.822 [2024-12-15 06:27:01.702008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.702042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.702225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.702259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.702448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.702482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.702677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.702711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.702823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.702857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 
00:36:41.822 [2024-12-15 06:27:01.703049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.703085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.703202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.703239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.703430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.703464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.703642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.703677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.703856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.703892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 
00:36:41.822 [2024-12-15 06:27:01.704026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.704062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.704182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.704215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.704407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.704442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.704558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.704593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.704710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.704744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 
00:36:41.822 [2024-12-15 06:27:01.704872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.704905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.706768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.706834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.706986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.707057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.707246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.707282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.707406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.707439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 
00:36:41.822 [2024-12-15 06:27:01.707572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.707605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.707715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.707748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.707931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.707965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.708159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.708194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.708472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.708506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 
00:36:41.822 [2024-12-15 06:27:01.708627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.708661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.708791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.708826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.709035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.709070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.709193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.709227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 00:36:41.822 [2024-12-15 06:27:01.709415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.822 [2024-12-15 06:27:01.709449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:41.822 qpair failed and we were unable to recover it. 
00:36:41.822 [2024-12-15 06:27:01.709591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.822 [2024-12-15 06:27:01.709626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:41.822 qpair failed and we were unable to recover it.
[... identical three-line error sequence repeated for each reconnect attempt on tqpair=0x7ff284000b90, timestamps 06:27:01.709746 through 06:27:01.713963 ...]
00:36:41.823 [2024-12-15 06:27:01.714219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.823 [2024-12-15 06:27:01.714294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:41.823 qpair failed and we were unable to recover it.
[... identical three-line error sequence repeated for each reconnect attempt on tqpair=0x7ff288000b90, timestamps 06:27:01.714453 through 06:27:01.731775 ...]
00:36:41.825 [2024-12-15 06:27:01.731944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.825 [2024-12-15 06:27:01.732022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:41.825 qpair failed and we were unable to recover it.
[... identical three-line error sequence repeated for each reconnect attempt on tqpair=0x7ff290000b90, timestamps 06:27:01.732274 through 06:27:01.733478 ...]
00:36:41.825 [2024-12-15 06:27:01.733719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.825 [2024-12-15 06:27:01.733752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.825 qpair failed and we were unable to recover it. 00:36:41.825 [2024-12-15 06:27:01.733874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.825 [2024-12-15 06:27:01.733907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.825 qpair failed and we were unable to recover it. 00:36:41.825 [2024-12-15 06:27:01.734044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.825 [2024-12-15 06:27:01.734079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.825 qpair failed and we were unable to recover it. 00:36:41.825 [2024-12-15 06:27:01.734196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.825 [2024-12-15 06:27:01.734230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.825 qpair failed and we were unable to recover it. 00:36:41.825 [2024-12-15 06:27:01.734338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.825 [2024-12-15 06:27:01.734371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.825 qpair failed and we were unable to recover it. 
00:36:41.825 [2024-12-15 06:27:01.734570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.825 [2024-12-15 06:27:01.734604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.825 qpair failed and we were unable to recover it. 00:36:41.825 [2024-12-15 06:27:01.734800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.825 [2024-12-15 06:27:01.734833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.825 qpair failed and we were unable to recover it. 00:36:41.825 [2024-12-15 06:27:01.734939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.825 [2024-12-15 06:27:01.734972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.825 qpair failed and we were unable to recover it. 00:36:41.825 [2024-12-15 06:27:01.736781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.736839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.736981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.737030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 
00:36:41.826 [2024-12-15 06:27:01.737221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.737256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.737388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.737420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.737696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.737729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.737955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.737988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.738296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.738330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 
00:36:41.826 [2024-12-15 06:27:01.738577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.738611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.738752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.738787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.739026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.739060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.739195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.739229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.739457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.739490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 
00:36:41.826 [2024-12-15 06:27:01.739800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.739836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.740113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.740148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.740416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.740451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.740570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.740603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.740802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.740836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 
00:36:41.826 [2024-12-15 06:27:01.741017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.741052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.741228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.741262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.741395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.741427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.741632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.741666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.741866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.741900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 
00:36:41.826 [2024-12-15 06:27:01.742138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.742172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.742308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.742349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.742537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.742570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.742690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.742724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.743022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.743058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 
00:36:41.826 [2024-12-15 06:27:01.743249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.743283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.743548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.743581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.743754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.743788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.744010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.744044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.744312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.744346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 
00:36:41.826 [2024-12-15 06:27:01.744478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.744511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.744721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.744755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.744962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.745006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.745247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.745281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.745516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.745550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 
00:36:41.826 [2024-12-15 06:27:01.745734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.745768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.746042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.746079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.746282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.746316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.746513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.746547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 00:36:41.826 [2024-12-15 06:27:01.746665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.746699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.826 qpair failed and we were unable to recover it. 
00:36:41.826 [2024-12-15 06:27:01.746967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.826 [2024-12-15 06:27:01.747011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.827 qpair failed and we were unable to recover it. 00:36:41.827 [2024-12-15 06:27:01.747197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.827 [2024-12-15 06:27:01.747230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.827 qpair failed and we were unable to recover it. 00:36:41.827 [2024-12-15 06:27:01.747480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.827 [2024-12-15 06:27:01.747513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.827 qpair failed and we were unable to recover it. 00:36:41.827 [2024-12-15 06:27:01.747749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.827 [2024-12-15 06:27:01.747783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.827 qpair failed and we were unable to recover it. 00:36:41.827 [2024-12-15 06:27:01.747900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.827 [2024-12-15 06:27:01.747934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.827 qpair failed and we were unable to recover it. 
00:36:41.827 [2024-12-15 06:27:01.748189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.827 [2024-12-15 06:27:01.748226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.827 qpair failed and we were unable to recover it. 00:36:41.827 [2024-12-15 06:27:01.748492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.827 [2024-12-15 06:27:01.748526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.827 qpair failed and we were unable to recover it. 00:36:41.827 [2024-12-15 06:27:01.748782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.827 [2024-12-15 06:27:01.748816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.827 qpair failed and we were unable to recover it. 00:36:41.827 [2024-12-15 06:27:01.749084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.827 [2024-12-15 06:27:01.749120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.827 qpair failed and we were unable to recover it. 00:36:41.827 [2024-12-15 06:27:01.749324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.827 [2024-12-15 06:27:01.749357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.827 qpair failed and we were unable to recover it. 
00:36:41.827 [2024-12-15 06:27:01.749554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.827 [2024-12-15 06:27:01.749587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.827 qpair failed and we were unable to recover it. 00:36:41.827 [2024-12-15 06:27:01.749777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.827 [2024-12-15 06:27:01.749811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.827 qpair failed and we were unable to recover it. 00:36:41.827 [2024-12-15 06:27:01.750011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.827 [2024-12-15 06:27:01.750046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.827 qpair failed and we were unable to recover it. 00:36:41.827 [2024-12-15 06:27:01.750315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.827 [2024-12-15 06:27:01.750350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.827 qpair failed and we were unable to recover it. 00:36:41.827 [2024-12-15 06:27:01.750482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.827 [2024-12-15 06:27:01.750516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.827 qpair failed and we were unable to recover it. 
00:36:41.827 [2024-12-15 06:27:01.750653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.827 [2024-12-15 06:27:01.750687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.827 qpair failed and we were unable to recover it. 00:36:41.827 [2024-12-15 06:27:01.750910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.827 [2024-12-15 06:27:01.750946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.827 qpair failed and we were unable to recover it. 00:36:41.827 [2024-12-15 06:27:01.751195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.827 [2024-12-15 06:27:01.751229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.827 qpair failed and we were unable to recover it. 00:36:41.827 [2024-12-15 06:27:01.751374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.827 [2024-12-15 06:27:01.751408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.827 qpair failed and we were unable to recover it. 00:36:41.827 [2024-12-15 06:27:01.751544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.827 [2024-12-15 06:27:01.751578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.827 qpair failed and we were unable to recover it. 
00:36:41.827 [2024-12-15 06:27:01.751838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.827 [2024-12-15 06:27:01.751874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.827 qpair failed and we were unable to recover it. 00:36:41.827 [2024-12-15 06:27:01.752066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.827 [2024-12-15 06:27:01.752108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.827 qpair failed and we were unable to recover it. 00:36:41.827 [2024-12-15 06:27:01.752346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.827 [2024-12-15 06:27:01.752381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.827 qpair failed and we were unable to recover it. 00:36:41.827 [2024-12-15 06:27:01.752647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.827 [2024-12-15 06:27:01.752680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.827 qpair failed and we were unable to recover it. 00:36:41.827 [2024-12-15 06:27:01.752870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.827 [2024-12-15 06:27:01.752904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.827 qpair failed and we were unable to recover it. 
00:36:41.827 [2024-12-15 06:27:01.753130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.827 [2024-12-15 06:27:01.753165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.827 qpair failed and we were unable to recover it. 00:36:41.827 [2024-12-15 06:27:01.753300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.827 [2024-12-15 06:27:01.753334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.827 qpair failed and we were unable to recover it. 00:36:41.827 [2024-12-15 06:27:01.753546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.827 [2024-12-15 06:27:01.753580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.827 qpair failed and we were unable to recover it. 00:36:41.827 [2024-12-15 06:27:01.753776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.827 [2024-12-15 06:27:01.753809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.827 qpair failed and we were unable to recover it. 00:36:41.827 [2024-12-15 06:27:01.754072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.827 [2024-12-15 06:27:01.754107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.827 qpair failed and we were unable to recover it. 
00:36:41.830 [2024-12-15 06:27:01.782028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.830 [2024-12-15 06:27:01.782064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.830 qpair failed and we were unable to recover it. 00:36:41.830 [2024-12-15 06:27:01.782251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.830 [2024-12-15 06:27:01.782292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.830 qpair failed and we were unable to recover it. 00:36:41.830 [2024-12-15 06:27:01.782486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.830 [2024-12-15 06:27:01.782520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.830 qpair failed and we were unable to recover it. 00:36:41.830 [2024-12-15 06:27:01.782724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.830 [2024-12-15 06:27:01.782756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.830 qpair failed and we were unable to recover it. 00:36:41.830 [2024-12-15 06:27:01.782947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.830 [2024-12-15 06:27:01.782981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.830 qpair failed and we were unable to recover it. 
00:36:41.830 [2024-12-15 06:27:01.783135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.830 [2024-12-15 06:27:01.783171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.830 qpair failed and we were unable to recover it. 00:36:41.830 [2024-12-15 06:27:01.783316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.830 [2024-12-15 06:27:01.783350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.830 qpair failed and we were unable to recover it. 00:36:41.830 [2024-12-15 06:27:01.783546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.830 [2024-12-15 06:27:01.783580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.830 qpair failed and we were unable to recover it. 00:36:41.830 [2024-12-15 06:27:01.783795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.830 [2024-12-15 06:27:01.783829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.830 qpair failed and we were unable to recover it. 00:36:41.830 [2024-12-15 06:27:01.784140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.830 [2024-12-15 06:27:01.784176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.830 qpair failed and we were unable to recover it. 
00:36:41.830 [2024-12-15 06:27:01.784297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.830 [2024-12-15 06:27:01.784331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.830 qpair failed and we were unable to recover it. 00:36:41.830 [2024-12-15 06:27:01.784510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.830 [2024-12-15 06:27:01.784545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.830 qpair failed and we were unable to recover it. 00:36:41.830 [2024-12-15 06:27:01.784780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.830 [2024-12-15 06:27:01.784813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.830 qpair failed and we were unable to recover it. 00:36:41.830 [2024-12-15 06:27:01.785045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.830 [2024-12-15 06:27:01.785082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.830 qpair failed and we were unable to recover it. 00:36:41.830 [2024-12-15 06:27:01.785301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.830 [2024-12-15 06:27:01.785336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.830 qpair failed and we were unable to recover it. 
00:36:41.830 [2024-12-15 06:27:01.785484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.830 [2024-12-15 06:27:01.785518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.830 qpair failed and we were unable to recover it. 00:36:41.830 [2024-12-15 06:27:01.785738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.830 [2024-12-15 06:27:01.785771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.830 qpair failed and we were unable to recover it. 00:36:41.830 [2024-12-15 06:27:01.786020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.830 [2024-12-15 06:27:01.786057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.830 qpair failed and we were unable to recover it. 00:36:41.830 [2024-12-15 06:27:01.786198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.830 [2024-12-15 06:27:01.786230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.830 qpair failed and we were unable to recover it. 00:36:41.830 [2024-12-15 06:27:01.786373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.830 [2024-12-15 06:27:01.786406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.830 qpair failed and we were unable to recover it. 
00:36:41.830 [2024-12-15 06:27:01.786604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.830 [2024-12-15 06:27:01.786636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.830 qpair failed and we were unable to recover it. 00:36:41.830 [2024-12-15 06:27:01.786815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.830 [2024-12-15 06:27:01.786848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.830 qpair failed and we were unable to recover it. 00:36:41.830 [2024-12-15 06:27:01.787090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.830 [2024-12-15 06:27:01.787126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.830 qpair failed and we were unable to recover it. 00:36:41.830 [2024-12-15 06:27:01.787361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.830 [2024-12-15 06:27:01.787395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.830 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.787655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.787688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 
00:36:41.831 [2024-12-15 06:27:01.787870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.787904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.788117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.788153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.788287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.788321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.788513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.788589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.788805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.788843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 
00:36:41.831 [2024-12-15 06:27:01.788989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.789046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.789181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.789215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.789367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.789401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.789587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.789621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.789736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.789770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 
00:36:41.831 [2024-12-15 06:27:01.789920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.789954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.790158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.790193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.790324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.790358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.790481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.790514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.790637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.790671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 
00:36:41.831 [2024-12-15 06:27:01.790817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.790850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.791040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.791076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.791215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.791250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.791368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.791401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.791519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.791553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 
00:36:41.831 [2024-12-15 06:27:01.791752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.791787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.791968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.792010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.792135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.792169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.792414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.792448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.792636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.792670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 
00:36:41.831 [2024-12-15 06:27:01.792862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.792895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.793023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.793059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.793248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.793282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.793471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.793505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.793622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.793656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 
00:36:41.831 [2024-12-15 06:27:01.793776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.793815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.793951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.793985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.794193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.794228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.794366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.794401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.794531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.794565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 
00:36:41.831 [2024-12-15 06:27:01.794746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.794781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.794899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.794933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.795064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.795100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.795406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.795440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.795637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.795671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 
00:36:41.831 [2024-12-15 06:27:01.795854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.831 [2024-12-15 06:27:01.795889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.831 qpair failed and we were unable to recover it. 00:36:41.831 [2024-12-15 06:27:01.796102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.796137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.832 [2024-12-15 06:27:01.796254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.796288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.832 [2024-12-15 06:27:01.796468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.796502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.832 [2024-12-15 06:27:01.796704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.796738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 
00:36:41.832 [2024-12-15 06:27:01.796930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.796963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.832 [2024-12-15 06:27:01.797094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.797130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.832 [2024-12-15 06:27:01.797255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.797289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.832 [2024-12-15 06:27:01.797421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.797454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.832 [2024-12-15 06:27:01.797582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.797616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 
00:36:41.832 [2024-12-15 06:27:01.797763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.797796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.832 [2024-12-15 06:27:01.797918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.797952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.832 [2024-12-15 06:27:01.798085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.798120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.832 [2024-12-15 06:27:01.798391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.798425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.832 [2024-12-15 06:27:01.798614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.798647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 
00:36:41.832 [2024-12-15 06:27:01.798846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.798881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.832 [2024-12-15 06:27:01.799074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.799109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.832 [2024-12-15 06:27:01.799298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.799339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.832 [2024-12-15 06:27:01.799537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.799577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.832 [2024-12-15 06:27:01.799700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.799735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 
00:36:41.832 [2024-12-15 06:27:01.799844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.799879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.832 [2024-12-15 06:27:01.800013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.800049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.832 [2024-12-15 06:27:01.800169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.800203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.832 [2024-12-15 06:27:01.800307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.800341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.832 [2024-12-15 06:27:01.800550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.800585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 
00:36:41.832 [2024-12-15 06:27:01.800700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.800735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.832 [2024-12-15 06:27:01.800873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.800907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.832 [2024-12-15 06:27:01.801051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.801087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.832 [2024-12-15 06:27:01.801202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.801236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.832 [2024-12-15 06:27:01.801374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.801408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 
00:36:41.832 [2024-12-15 06:27:01.801542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.801576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.832 [2024-12-15 06:27:01.801705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.801741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.832 [2024-12-15 06:27:01.801865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.801901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.832 [2024-12-15 06:27:01.802010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.802046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.832 [2024-12-15 06:27:01.802152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.802186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 
00:36:41.832 [2024-12-15 06:27:01.802368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.802401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.832 [2024-12-15 06:27:01.802609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.802642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.832 [2024-12-15 06:27:01.802779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.802814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.832 [2024-12-15 06:27:01.803017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.832 [2024-12-15 06:27:01.803051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.832 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.803160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.803194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 
00:36:41.833 [2024-12-15 06:27:01.803375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.803408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.803537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.803571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.803769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.803804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.803981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.804023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.804145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.804185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 
00:36:41.833 [2024-12-15 06:27:01.804297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.804330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.804446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.804479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.804585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.804619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.804803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.804837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.805032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.805067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 
00:36:41.833 [2024-12-15 06:27:01.805252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.805287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.805409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.805443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.805584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.805618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.805734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.805767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.805901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.805934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 
00:36:41.833 [2024-12-15 06:27:01.806075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.806109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.806247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.806280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.806403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.806436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.806554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.806587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.806771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.806803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 
00:36:41.833 [2024-12-15 06:27:01.806932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.806966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.807155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.807188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.807434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.807468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.807598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.807631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.807765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.807799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 
00:36:41.833 [2024-12-15 06:27:01.807977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.808019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.808215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.808248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.808367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.808401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.808527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.808560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.808687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.808720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 
00:36:41.833 [2024-12-15 06:27:01.808831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.808863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.809040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.809075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.809262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.809297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.809496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.809530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.809706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.809739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 
00:36:41.833 [2024-12-15 06:27:01.809929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.809962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.810102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.810136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.810322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.810355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.810537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.810569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.810674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.810708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 
00:36:41.833 [2024-12-15 06:27:01.810841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.833 [2024-12-15 06:27:01.810875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.833 qpair failed and we were unable to recover it. 00:36:41.833 [2024-12-15 06:27:01.811137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.834 [2024-12-15 06:27:01.811173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.834 qpair failed and we were unable to recover it. 00:36:41.834 [2024-12-15 06:27:01.811371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.834 [2024-12-15 06:27:01.811405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.834 qpair failed and we were unable to recover it. 00:36:41.834 [2024-12-15 06:27:01.811615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.834 [2024-12-15 06:27:01.811648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.834 qpair failed and we were unable to recover it. 00:36:41.834 [2024-12-15 06:27:01.811789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.834 [2024-12-15 06:27:01.811822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.834 qpair failed and we were unable to recover it. 
00:36:41.834 [2024-12-15 06:27:01.812068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.834 [2024-12-15 06:27:01.812103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.834 qpair failed and we were unable to recover it. 00:36:41.834 [2024-12-15 06:27:01.812295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.834 [2024-12-15 06:27:01.812330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.834 qpair failed and we were unable to recover it. 00:36:41.834 [2024-12-15 06:27:01.812545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.834 [2024-12-15 06:27:01.812579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.834 qpair failed and we were unable to recover it. 00:36:41.834 [2024-12-15 06:27:01.812728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.834 [2024-12-15 06:27:01.812761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.834 qpair failed and we were unable to recover it. 00:36:41.834 [2024-12-15 06:27:01.812953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.834 [2024-12-15 06:27:01.812988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.834 qpair failed and we were unable to recover it. 
00:36:41.834 [2024-12-15 06:27:01.813135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.834 [2024-12-15 06:27:01.813168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.834 qpair failed and we were unable to recover it. 00:36:41.834 [2024-12-15 06:27:01.813429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.834 [2024-12-15 06:27:01.813463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.834 qpair failed and we were unable to recover it. 00:36:41.834 [2024-12-15 06:27:01.813728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.834 [2024-12-15 06:27:01.813761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.834 qpair failed and we were unable to recover it. 00:36:41.834 [2024-12-15 06:27:01.814034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.834 [2024-12-15 06:27:01.814070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.834 qpair failed and we were unable to recover it. 00:36:41.834 [2024-12-15 06:27:01.814200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.834 [2024-12-15 06:27:01.814233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.834 qpair failed and we were unable to recover it. 
00:36:41.834 [2024-12-15 06:27:01.814412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.834 [2024-12-15 06:27:01.814445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.834 qpair failed and we were unable to recover it. 00:36:41.834 [2024-12-15 06:27:01.814659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.834 [2024-12-15 06:27:01.814692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.834 qpair failed and we were unable to recover it. 00:36:41.834 [2024-12-15 06:27:01.814844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.834 [2024-12-15 06:27:01.814877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.834 qpair failed and we were unable to recover it. 00:36:41.834 [2024-12-15 06:27:01.815005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.834 [2024-12-15 06:27:01.815040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.834 qpair failed and we were unable to recover it. 00:36:41.834 [2024-12-15 06:27:01.815172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.834 [2024-12-15 06:27:01.815207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.834 qpair failed and we were unable to recover it. 
00:36:41.834 [2024-12-15 06:27:01.815417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.834 [2024-12-15 06:27:01.815450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.834 qpair failed and we were unable to recover it. 00:36:41.834 [2024-12-15 06:27:01.815702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.834 [2024-12-15 06:27:01.815735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.834 qpair failed and we were unable to recover it. 00:36:41.834 [2024-12-15 06:27:01.815929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.834 [2024-12-15 06:27:01.815963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.834 qpair failed and we were unable to recover it. 00:36:41.834 [2024-12-15 06:27:01.816180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.834 [2024-12-15 06:27:01.816214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.834 qpair failed and we were unable to recover it. 00:36:41.834 [2024-12-15 06:27:01.816480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.834 [2024-12-15 06:27:01.816513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.834 qpair failed and we were unable to recover it. 
00:36:41.834 [2024-12-15 06:27:01.816629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.834 [2024-12-15 06:27:01.816662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.834 qpair failed and we were unable to recover it. 00:36:41.834 [2024-12-15 06:27:01.816836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.834 [2024-12-15 06:27:01.816869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.834 qpair failed and we were unable to recover it. 00:36:41.834 [2024-12-15 06:27:01.817117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.834 [2024-12-15 06:27:01.817152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.834 qpair failed and we were unable to recover it. 00:36:41.834 [2024-12-15 06:27:01.817270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.834 [2024-12-15 06:27:01.817304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.834 qpair failed and we were unable to recover it. 00:36:41.834 [2024-12-15 06:27:01.817476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.834 [2024-12-15 06:27:01.817509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.834 qpair failed and we were unable to recover it. 
00:36:41.837 [2024-12-15 06:27:01.842878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.837 [2024-12-15 06:27:01.842912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.837 qpair failed and we were unable to recover it. 00:36:41.837 [2024-12-15 06:27:01.843066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.837 [2024-12-15 06:27:01.843102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.837 qpair failed and we were unable to recover it. 00:36:41.837 [2024-12-15 06:27:01.843295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.837 [2024-12-15 06:27:01.843329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.837 qpair failed and we were unable to recover it. 00:36:41.837 [2024-12-15 06:27:01.843512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.837 [2024-12-15 06:27:01.843545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.837 qpair failed and we were unable to recover it. 00:36:41.837 [2024-12-15 06:27:01.843750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.837 [2024-12-15 06:27:01.843783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.837 qpair failed and we were unable to recover it. 
00:36:41.837 [2024-12-15 06:27:01.844097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.837 [2024-12-15 06:27:01.844133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.837 qpair failed and we were unable to recover it. 00:36:41.837 [2024-12-15 06:27:01.844288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.837 [2024-12-15 06:27:01.844323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.837 qpair failed and we were unable to recover it. 00:36:41.837 [2024-12-15 06:27:01.844563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.837 [2024-12-15 06:27:01.844597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.837 qpair failed and we were unable to recover it. 00:36:41.837 [2024-12-15 06:27:01.844827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.837 [2024-12-15 06:27:01.844862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.837 qpair failed and we were unable to recover it. 00:36:41.837 [2024-12-15 06:27:01.845047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.837 [2024-12-15 06:27:01.845083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.837 qpair failed and we were unable to recover it. 
00:36:41.837 [2024-12-15 06:27:01.845321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.837 [2024-12-15 06:27:01.845354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.837 qpair failed and we were unable to recover it. 00:36:41.837 [2024-12-15 06:27:01.845475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.837 [2024-12-15 06:27:01.845508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.837 qpair failed and we were unable to recover it. 00:36:41.837 [2024-12-15 06:27:01.845636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.837 [2024-12-15 06:27:01.845671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.837 qpair failed and we were unable to recover it. 00:36:41.837 [2024-12-15 06:27:01.845874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.837 [2024-12-15 06:27:01.845908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.837 qpair failed and we were unable to recover it. 00:36:41.837 [2024-12-15 06:27:01.846159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.837 [2024-12-15 06:27:01.846200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.837 qpair failed and we were unable to recover it. 
00:36:41.837 [2024-12-15 06:27:01.846396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.837 [2024-12-15 06:27:01.846429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.837 qpair failed and we were unable to recover it. 00:36:41.837 [2024-12-15 06:27:01.846651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.837 [2024-12-15 06:27:01.846685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.837 qpair failed and we were unable to recover it. 00:36:41.837 [2024-12-15 06:27:01.846868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.837 [2024-12-15 06:27:01.846903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.837 qpair failed and we were unable to recover it. 00:36:41.837 [2024-12-15 06:27:01.847109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.837 [2024-12-15 06:27:01.847145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.837 qpair failed and we were unable to recover it. 00:36:41.837 [2024-12-15 06:27:01.847340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.837 [2024-12-15 06:27:01.847374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.837 qpair failed and we were unable to recover it. 
00:36:41.837 [2024-12-15 06:27:01.847567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.837 [2024-12-15 06:27:01.847601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.837 qpair failed and we were unable to recover it. 00:36:41.837 [2024-12-15 06:27:01.847862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.837 [2024-12-15 06:27:01.847896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.837 qpair failed and we were unable to recover it. 00:36:41.837 [2024-12-15 06:27:01.848088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.837 [2024-12-15 06:27:01.848124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.837 qpair failed and we were unable to recover it. 00:36:41.837 [2024-12-15 06:27:01.848318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.837 [2024-12-15 06:27:01.848351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.837 qpair failed and we were unable to recover it. 00:36:41.837 [2024-12-15 06:27:01.848597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.837 [2024-12-15 06:27:01.848629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 
00:36:41.838 [2024-12-15 06:27:01.848843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.848877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.849128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.849163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.849348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.849382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.849533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.849568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.849903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.849939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 
00:36:41.838 [2024-12-15 06:27:01.850181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.850217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.850357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.850391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.850567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.850600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.850801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.850836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.851037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.851073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 
00:36:41.838 [2024-12-15 06:27:01.851294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.851328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.851467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.851501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.851793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.851827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.852070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.852105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.852310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.852345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 
00:36:41.838 [2024-12-15 06:27:01.852589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.852621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.852835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.852875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.853059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.853094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.853380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.853414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.853632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.853667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 
00:36:41.838 [2024-12-15 06:27:01.853853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.853887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.854121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.854156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.854396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.854430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.854626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.854660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.854902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.854936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 
00:36:41.838 [2024-12-15 06:27:01.855225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.855261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.855477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.855512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.855805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.855839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.856133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.856168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.856295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.856330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 
00:36:41.838 [2024-12-15 06:27:01.856631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.856663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.856858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.856892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.857032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.857066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.857332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.857366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.857555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.857589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 
00:36:41.838 [2024-12-15 06:27:01.857698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.857733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.857917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.857949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.859475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.859534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.859850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.859884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.860160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.860197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 
00:36:41.838 [2024-12-15 06:27:01.860342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.860376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.860517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.838 [2024-12-15 06:27:01.860550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.838 qpair failed and we were unable to recover it. 00:36:41.838 [2024-12-15 06:27:01.860819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.839 [2024-12-15 06:27:01.860854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.839 qpair failed and we were unable to recover it. 00:36:41.839 [2024-12-15 06:27:01.861046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.839 [2024-12-15 06:27:01.861082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.839 qpair failed and we were unable to recover it. 00:36:41.839 [2024-12-15 06:27:01.861220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.839 [2024-12-15 06:27:01.861253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.839 qpair failed and we were unable to recover it. 
00:36:41.839 [2024-12-15 06:27:01.861465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.839 [2024-12-15 06:27:01.861499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.839 qpair failed and we were unable to recover it. 00:36:41.839 [2024-12-15 06:27:01.861702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.839 [2024-12-15 06:27:01.861736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.839 qpair failed and we were unable to recover it. 00:36:41.839 [2024-12-15 06:27:01.861978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.839 [2024-12-15 06:27:01.862021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.839 qpair failed and we were unable to recover it. 00:36:41.839 [2024-12-15 06:27:01.862218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.839 [2024-12-15 06:27:01.862252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.839 qpair failed and we were unable to recover it. 00:36:41.839 [2024-12-15 06:27:01.862440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.839 [2024-12-15 06:27:01.862474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.839 qpair failed and we were unable to recover it. 
00:36:41.839 [2024-12-15 06:27:01.862764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.839 [2024-12-15 06:27:01.862798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:41.839 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats continuously from 06:27:01.862 through 06:27:01.890 (connect() failed, errno = 111; sock connection error, addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it), alternating between tqpair=0x1c89cd0 and tqpair=0x7ff290000b90 ...]
00:36:41.842 [2024-12-15 06:27:01.890795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.842 [2024-12-15 06:27:01.890829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:41.842 qpair failed and we were unable to recover it.
00:36:41.842 [2024-12-15 06:27:01.891081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.891116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.891307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.891343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.891496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.891527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.891738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.891774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.892074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.892108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 
00:36:41.842 [2024-12-15 06:27:01.892301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.892334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.892559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.892593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.892913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.892944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.893202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.893236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.893425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.893458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 
00:36:41.842 [2024-12-15 06:27:01.893584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.893618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.893829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.893863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.894008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.894043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.894240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.894280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.894501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.894534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 
00:36:41.842 [2024-12-15 06:27:01.894759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.894793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.895008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.895043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.895177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.895210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.895434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.895468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.895713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.895746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 
00:36:41.842 [2024-12-15 06:27:01.895934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.895968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.896283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.896318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.896510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.896542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.896777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.896811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.897065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.897100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 
00:36:41.842 [2024-12-15 06:27:01.897253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.897286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.897432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.897465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.897769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.897803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.898009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.898044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.898195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.898229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 
00:36:41.842 [2024-12-15 06:27:01.898374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.898408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.898684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.898718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.899037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.899073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.899213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.899246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.899393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.899426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 
00:36:41.842 [2024-12-15 06:27:01.899662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.899694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.899935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.899969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.900143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.900178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.900372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.900405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.842 [2024-12-15 06:27:01.900543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.900576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 
00:36:41.842 [2024-12-15 06:27:01.900698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.842 [2024-12-15 06:27:01.900731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.842 qpair failed and we were unable to recover it. 00:36:41.843 [2024-12-15 06:27:01.900987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.901037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 00:36:41.843 [2024-12-15 06:27:01.901180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.901213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 00:36:41.843 [2024-12-15 06:27:01.901346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.901378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 00:36:41.843 [2024-12-15 06:27:01.901576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.901616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 
00:36:41.843 [2024-12-15 06:27:01.901812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.901845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 00:36:41.843 [2024-12-15 06:27:01.902009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.902044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 00:36:41.843 [2024-12-15 06:27:01.902165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.902198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 00:36:41.843 [2024-12-15 06:27:01.902347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.902381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 00:36:41.843 [2024-12-15 06:27:01.902507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.902540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 
00:36:41.843 [2024-12-15 06:27:01.902678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.902710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 00:36:41.843 [2024-12-15 06:27:01.902818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.902853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 00:36:41.843 [2024-12-15 06:27:01.903034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.903070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 00:36:41.843 [2024-12-15 06:27:01.903196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.903228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 00:36:41.843 [2024-12-15 06:27:01.903360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.903396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 
00:36:41.843 [2024-12-15 06:27:01.903519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.903550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 00:36:41.843 [2024-12-15 06:27:01.903686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.903727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 00:36:41.843 [2024-12-15 06:27:01.903929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.903964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 00:36:41.843 [2024-12-15 06:27:01.904121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.904154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 00:36:41.843 [2024-12-15 06:27:01.904268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.904301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 
00:36:41.843 [2024-12-15 06:27:01.904441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.904475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 00:36:41.843 [2024-12-15 06:27:01.904595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.904628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 00:36:41.843 [2024-12-15 06:27:01.904759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.904791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 00:36:41.843 [2024-12-15 06:27:01.904931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.904966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 00:36:41.843 [2024-12-15 06:27:01.905113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.905147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 
00:36:41.843 [2024-12-15 06:27:01.905336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.905368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 00:36:41.843 [2024-12-15 06:27:01.905503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.905537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 00:36:41.843 [2024-12-15 06:27:01.905666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.905700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 00:36:41.843 [2024-12-15 06:27:01.905899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.905933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 00:36:41.843 [2024-12-15 06:27:01.906178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.906215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 
00:36:41.843 [2024-12-15 06:27:01.906365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.906398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 00:36:41.843 [2024-12-15 06:27:01.906533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.906566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 00:36:41.843 [2024-12-15 06:27:01.906695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.906728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 00:36:41.843 [2024-12-15 06:27:01.906936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.906970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 00:36:41.843 [2024-12-15 06:27:01.907182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.907217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it. 
00:36:41.843 [2024-12-15 06:27:01.907419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.843 [2024-12-15 06:27:01.907453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:41.843 qpair failed and we were unable to recover it.
[preceding connect()/qpair-failure pair repeated for tqpair=0x1c89cd0 through 06:27:01.921780]
00:36:41.845 [2024-12-15 06:27:01.922039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.845 [2024-12-15 06:27:01.922116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:41.845 qpair failed and we were unable to recover it.
[preceding connect()/qpair-failure pair repeated for tqpair=0x7ff288000b90 through 06:27:01.929945]
00:36:42.136 [2024-12-15 06:27:01.930067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.136 [2024-12-15 06:27:01.930109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.136 qpair failed and we were unable to recover it. 00:36:42.136 [2024-12-15 06:27:01.930288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.136 [2024-12-15 06:27:01.930321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.136 qpair failed and we were unable to recover it. 00:36:42.136 [2024-12-15 06:27:01.930440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.136 [2024-12-15 06:27:01.930473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.136 qpair failed and we were unable to recover it. 00:36:42.136 [2024-12-15 06:27:01.930580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.136 [2024-12-15 06:27:01.930613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.136 qpair failed and we were unable to recover it. 00:36:42.136 [2024-12-15 06:27:01.930757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.136 [2024-12-15 06:27:01.930789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.136 qpair failed and we were unable to recover it. 
00:36:42.136 [2024-12-15 06:27:01.930913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.136 [2024-12-15 06:27:01.930946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.136 qpair failed and we were unable to recover it. 00:36:42.136 [2024-12-15 06:27:01.931088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.136 [2024-12-15 06:27:01.931122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.136 qpair failed and we were unable to recover it. 00:36:42.136 [2024-12-15 06:27:01.931247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.136 [2024-12-15 06:27:01.931278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.136 qpair failed and we were unable to recover it. 00:36:42.136 [2024-12-15 06:27:01.931469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.136 [2024-12-15 06:27:01.931504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.136 qpair failed and we were unable to recover it. 00:36:42.136 [2024-12-15 06:27:01.931690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.136 [2024-12-15 06:27:01.931723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.136 qpair failed and we were unable to recover it. 
00:36:42.136 [2024-12-15 06:27:01.931849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.136 [2024-12-15 06:27:01.931881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.136 qpair failed and we were unable to recover it. 00:36:42.136 [2024-12-15 06:27:01.932013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.136 [2024-12-15 06:27:01.932048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.136 qpair failed and we were unable to recover it. 00:36:42.136 [2024-12-15 06:27:01.932172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.136 [2024-12-15 06:27:01.932212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.136 qpair failed and we were unable to recover it. 00:36:42.136 [2024-12-15 06:27:01.932320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.932352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 00:36:42.137 [2024-12-15 06:27:01.932470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.932504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 
00:36:42.137 [2024-12-15 06:27:01.932639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.932671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 00:36:42.137 [2024-12-15 06:27:01.932794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.932825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 00:36:42.137 [2024-12-15 06:27:01.932967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.933008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 00:36:42.137 [2024-12-15 06:27:01.933142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.933175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 00:36:42.137 [2024-12-15 06:27:01.933306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.933339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 
00:36:42.137 [2024-12-15 06:27:01.933451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.933484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 00:36:42.137 [2024-12-15 06:27:01.933664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.933695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 00:36:42.137 [2024-12-15 06:27:01.933821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.933854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 00:36:42.137 [2024-12-15 06:27:01.933975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.934024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 00:36:42.137 [2024-12-15 06:27:01.934248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.934281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 
00:36:42.137 [2024-12-15 06:27:01.934530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.934562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 00:36:42.137 [2024-12-15 06:27:01.934691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.934725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 00:36:42.137 [2024-12-15 06:27:01.934848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.934890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 00:36:42.137 [2024-12-15 06:27:01.935011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.935045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 00:36:42.137 [2024-12-15 06:27:01.935191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.935225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 
00:36:42.137 [2024-12-15 06:27:01.935336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.935367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 00:36:42.137 [2024-12-15 06:27:01.935548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.935581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 00:36:42.137 [2024-12-15 06:27:01.935766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.935800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 00:36:42.137 [2024-12-15 06:27:01.935916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.935948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 00:36:42.137 [2024-12-15 06:27:01.936077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.936111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 
00:36:42.137 [2024-12-15 06:27:01.936235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.936268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 00:36:42.137 [2024-12-15 06:27:01.936466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.936498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 00:36:42.137 [2024-12-15 06:27:01.936683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.936715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 00:36:42.137 [2024-12-15 06:27:01.936828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.936861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 00:36:42.137 [2024-12-15 06:27:01.937053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.937087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 
00:36:42.137 [2024-12-15 06:27:01.937275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.937307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 00:36:42.137 [2024-12-15 06:27:01.937558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.937592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 00:36:42.137 [2024-12-15 06:27:01.937773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.937804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 00:36:42.137 [2024-12-15 06:27:01.937942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.937976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 00:36:42.137 [2024-12-15 06:27:01.938109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.938144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 
00:36:42.137 [2024-12-15 06:27:01.938261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.938293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 00:36:42.137 [2024-12-15 06:27:01.938405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.938438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 00:36:42.137 [2024-12-15 06:27:01.938613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.938645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 00:36:42.137 [2024-12-15 06:27:01.938757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.938789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 00:36:42.137 [2024-12-15 06:27:01.938970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.939024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 
00:36:42.137 [2024-12-15 06:27:01.939228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.137 [2024-12-15 06:27:01.939261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.137 qpair failed and we were unable to recover it. 00:36:42.138 [2024-12-15 06:27:01.939440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.138 [2024-12-15 06:27:01.939472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.138 qpair failed and we were unable to recover it. 00:36:42.138 [2024-12-15 06:27:01.939681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.138 [2024-12-15 06:27:01.939721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.138 qpair failed and we were unable to recover it. 00:36:42.138 [2024-12-15 06:27:01.939925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.138 [2024-12-15 06:27:01.939958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.138 qpair failed and we were unable to recover it. 00:36:42.138 [2024-12-15 06:27:01.940166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.138 [2024-12-15 06:27:01.940200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.138 qpair failed and we were unable to recover it. 
00:36:42.138 [2024-12-15 06:27:01.940461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.138 [2024-12-15 06:27:01.940493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.138 qpair failed and we were unable to recover it. 00:36:42.138 [2024-12-15 06:27:01.940763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.138 [2024-12-15 06:27:01.940795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.138 qpair failed and we were unable to recover it. 00:36:42.138 [2024-12-15 06:27:01.941013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.138 [2024-12-15 06:27:01.941045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.138 qpair failed and we were unable to recover it. 00:36:42.138 [2024-12-15 06:27:01.941258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.138 [2024-12-15 06:27:01.941291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.138 qpair failed and we were unable to recover it. 00:36:42.138 [2024-12-15 06:27:01.941485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.138 [2024-12-15 06:27:01.941518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.138 qpair failed and we were unable to recover it. 
00:36:42.138 [2024-12-15 06:27:01.941729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.138 [2024-12-15 06:27:01.941761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.138 qpair failed and we were unable to recover it. 00:36:42.138 [2024-12-15 06:27:01.941890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.138 [2024-12-15 06:27:01.941922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.138 qpair failed and we were unable to recover it. 00:36:42.138 [2024-12-15 06:27:01.942057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.138 [2024-12-15 06:27:01.942090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.138 qpair failed and we were unable to recover it. 00:36:42.138 [2024-12-15 06:27:01.942274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.138 [2024-12-15 06:27:01.942305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.138 qpair failed and we were unable to recover it. 00:36:42.138 [2024-12-15 06:27:01.942523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.138 [2024-12-15 06:27:01.942555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.138 qpair failed and we were unable to recover it. 
00:36:42.138 [2024-12-15 06:27:01.942808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.138 [2024-12-15 06:27:01.942841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.138 qpair failed and we were unable to recover it. 00:36:42.138 [2024-12-15 06:27:01.942987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.138 [2024-12-15 06:27:01.943028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.138 qpair failed and we were unable to recover it. 00:36:42.138 [2024-12-15 06:27:01.943228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.138 [2024-12-15 06:27:01.943260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.138 qpair failed and we were unable to recover it. 00:36:42.138 [2024-12-15 06:27:01.943511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.138 [2024-12-15 06:27:01.943544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.138 qpair failed and we were unable to recover it. 00:36:42.138 [2024-12-15 06:27:01.943731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.138 [2024-12-15 06:27:01.943764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.138 qpair failed and we were unable to recover it. 
00:36:42.138 [2024-12-15 06:27:01.943979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.138 [2024-12-15 06:27:01.944018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.138 qpair failed and we were unable to recover it. 00:36:42.138 [2024-12-15 06:27:01.944154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.138 [2024-12-15 06:27:01.944187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.138 qpair failed and we were unable to recover it. 00:36:42.138 [2024-12-15 06:27:01.944385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.138 [2024-12-15 06:27:01.944417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.138 qpair failed and we were unable to recover it. 00:36:42.138 [2024-12-15 06:27:01.944549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.138 [2024-12-15 06:27:01.944580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.138 qpair failed and we were unable to recover it. 00:36:42.138 [2024-12-15 06:27:01.944799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.138 [2024-12-15 06:27:01.944833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.138 qpair failed and we were unable to recover it. 
00:36:42.138 [2024-12-15 06:27:01.945053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.138 [2024-12-15 06:27:01.945086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:42.138 qpair failed and we were unable to recover it.
[... same connect() failure (errno = 111) and qpair recovery failure against tqpair=0x7ff288000b90, addr=10.0.0.2, port=4420 repeated through 06:27:01.972183 ...]
00:36:42.141 [2024-12-15 06:27:01.972407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.141 [2024-12-15 06:27:01.972438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:42.141 qpair failed and we were unable to recover it.
00:36:42.141 [2024-12-15 06:27:01.972690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.141 [2024-12-15 06:27:01.972722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.141 qpair failed and we were unable to recover it. 00:36:42.141 [2024-12-15 06:27:01.972933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.141 [2024-12-15 06:27:01.972966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.141 qpair failed and we were unable to recover it. 00:36:42.141 [2024-12-15 06:27:01.973151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.141 [2024-12-15 06:27:01.973185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.141 qpair failed and we were unable to recover it. 00:36:42.141 [2024-12-15 06:27:01.973383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.141 [2024-12-15 06:27:01.973415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.141 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.973566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.973598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 
00:36:42.142 [2024-12-15 06:27:01.973899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.973933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.974155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.974189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.974331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.974363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.974506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.974538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.974774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.974817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 
00:36:42.142 [2024-12-15 06:27:01.975016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.975049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.975247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.975280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.975413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.975446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.975705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.975737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.975924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.975959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 
00:36:42.142 [2024-12-15 06:27:01.976157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.976191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.976328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.976361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.976561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.976594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.976817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.976850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.977049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.977083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 
00:36:42.142 [2024-12-15 06:27:01.977218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.977251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.977501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.977536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.977838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.977870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.978149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.978184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.978321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.978354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 
00:36:42.142 [2024-12-15 06:27:01.978480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.978512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.978739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.978772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.978985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.979045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.979250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.979282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.979395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.979429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 
00:36:42.142 [2024-12-15 06:27:01.979657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.979690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.979946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.979979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.980137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.980170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.980377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.980409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.980658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.980691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 
00:36:42.142 [2024-12-15 06:27:01.980939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.980972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.981114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.981147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.981333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.981367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.981656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.981689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.981985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.982030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 
00:36:42.142 [2024-12-15 06:27:01.982234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.982267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.982417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.982450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.142 qpair failed and we were unable to recover it. 00:36:42.142 [2024-12-15 06:27:01.982583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.142 [2024-12-15 06:27:01.982617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 00:36:42.143 [2024-12-15 06:27:01.982798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.982831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 00:36:42.143 [2024-12-15 06:27:01.983080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.983115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 
00:36:42.143 [2024-12-15 06:27:01.983296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.983329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 00:36:42.143 [2024-12-15 06:27:01.983527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.983560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 00:36:42.143 [2024-12-15 06:27:01.983855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.983889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 00:36:42.143 [2024-12-15 06:27:01.984070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.984105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 00:36:42.143 [2024-12-15 06:27:01.984258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.984297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 
00:36:42.143 [2024-12-15 06:27:01.984484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.984516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 00:36:42.143 [2024-12-15 06:27:01.984835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.984867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 00:36:42.143 [2024-12-15 06:27:01.985161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.985195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 00:36:42.143 [2024-12-15 06:27:01.985324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.985356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 00:36:42.143 [2024-12-15 06:27:01.985512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.985545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 
00:36:42.143 [2024-12-15 06:27:01.985745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.985779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 00:36:42.143 [2024-12-15 06:27:01.986048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.986083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 00:36:42.143 [2024-12-15 06:27:01.986219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.986251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 00:36:42.143 [2024-12-15 06:27:01.986401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.986435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 00:36:42.143 [2024-12-15 06:27:01.986736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.986768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 
00:36:42.143 [2024-12-15 06:27:01.986971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.987036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 00:36:42.143 [2024-12-15 06:27:01.987189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.987222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 00:36:42.143 [2024-12-15 06:27:01.987384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.987419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 00:36:42.143 [2024-12-15 06:27:01.987582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.987615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 00:36:42.143 [2024-12-15 06:27:01.987827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.987862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 
00:36:42.143 [2024-12-15 06:27:01.988050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.988084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 00:36:42.143 [2024-12-15 06:27:01.988266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.988300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 00:36:42.143 [2024-12-15 06:27:01.988446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.988478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 00:36:42.143 [2024-12-15 06:27:01.988629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.988663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 00:36:42.143 [2024-12-15 06:27:01.988803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.988835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 
00:36:42.143 [2024-12-15 06:27:01.988958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.988991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 00:36:42.143 [2024-12-15 06:27:01.989243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.989276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 00:36:42.143 [2024-12-15 06:27:01.989498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.989530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 00:36:42.143 [2024-12-15 06:27:01.989656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.989688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 00:36:42.143 [2024-12-15 06:27:01.989886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.143 [2024-12-15 06:27:01.989921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.143 qpair failed and we were unable to recover it. 
00:36:42.143 [2024-12-15 06:27:01.990111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.143 [2024-12-15 06:27:01.990146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:42.143 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats continuously through timestamp 2024-12-15 06:27:02.017186, with only the timestamps changing ...]
00:36:42.147 [2024-12-15 06:27:02.017315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.017348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 00:36:42.147 [2024-12-15 06:27:02.017553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.017585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 00:36:42.147 [2024-12-15 06:27:02.017899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.017934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 00:36:42.147 [2024-12-15 06:27:02.018145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.018180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 00:36:42.147 [2024-12-15 06:27:02.018366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.018400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 
00:36:42.147 [2024-12-15 06:27:02.018548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.018581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 00:36:42.147 [2024-12-15 06:27:02.018884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.018919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 00:36:42.147 [2024-12-15 06:27:02.019063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.019098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 00:36:42.147 [2024-12-15 06:27:02.019377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.019411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 00:36:42.147 [2024-12-15 06:27:02.019636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.019669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 
00:36:42.147 [2024-12-15 06:27:02.019947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.019980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 00:36:42.147 [2024-12-15 06:27:02.020264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.020298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 00:36:42.147 [2024-12-15 06:27:02.020554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.020587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 00:36:42.147 [2024-12-15 06:27:02.020790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.020824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 00:36:42.147 [2024-12-15 06:27:02.021039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.021073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 
00:36:42.147 [2024-12-15 06:27:02.021272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.021307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 00:36:42.147 [2024-12-15 06:27:02.021514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.021550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 00:36:42.147 [2024-12-15 06:27:02.021673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.021705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 00:36:42.147 [2024-12-15 06:27:02.021982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.022025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 00:36:42.147 [2024-12-15 06:27:02.022205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.022249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 
00:36:42.147 [2024-12-15 06:27:02.022459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.022493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 00:36:42.147 [2024-12-15 06:27:02.022768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.022800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 00:36:42.147 [2024-12-15 06:27:02.023075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.023111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 00:36:42.147 [2024-12-15 06:27:02.023417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.023450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 00:36:42.147 [2024-12-15 06:27:02.023730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.023763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 
00:36:42.147 [2024-12-15 06:27:02.024074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.024109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 00:36:42.147 [2024-12-15 06:27:02.024363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.024394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 00:36:42.147 [2024-12-15 06:27:02.024547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.024581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 00:36:42.147 [2024-12-15 06:27:02.024857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.024889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 00:36:42.147 [2024-12-15 06:27:02.025168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.025201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 
00:36:42.147 [2024-12-15 06:27:02.025343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.025376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 00:36:42.147 [2024-12-15 06:27:02.025683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.025715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 00:36:42.147 [2024-12-15 06:27:02.025961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.026004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 00:36:42.147 [2024-12-15 06:27:02.026217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.026250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 00:36:42.147 [2024-12-15 06:27:02.026385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.147 [2024-12-15 06:27:02.026418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.147 qpair failed and we were unable to recover it. 
00:36:42.147 [2024-12-15 06:27:02.026554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.026586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.026857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.026890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.027175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.027208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.027421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.027454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.027683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.027716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 
00:36:42.148 [2024-12-15 06:27:02.027868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.027902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.028214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.028248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.028369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.028402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.028654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.028687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.028886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.028919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 
00:36:42.148 [2024-12-15 06:27:02.029073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.029108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.029302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.029336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.029540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.029573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.029847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.029882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.030090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.030124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 
00:36:42.148 [2024-12-15 06:27:02.030412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.030446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.030706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.030740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.030959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.030999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.031158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.031194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.031396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.031429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 
00:36:42.148 [2024-12-15 06:27:02.031633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.031667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.031892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.031925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.032219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.032252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.032456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.032489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.032677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.032715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 
00:36:42.148 [2024-12-15 06:27:02.032969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.033014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.033223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.033257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.033448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.033484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.033667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.033700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.033885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.033918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 
00:36:42.148 [2024-12-15 06:27:02.034119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.034153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.034431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.034463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.034718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.034750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.035032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.035067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.035251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.035283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 
00:36:42.148 [2024-12-15 06:27:02.035480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.035512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.035709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.035741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.035971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.036026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.148 qpair failed and we were unable to recover it. 00:36:42.148 [2024-12-15 06:27:02.036173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.148 [2024-12-15 06:27:02.036207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.149 qpair failed and we were unable to recover it. 00:36:42.149 [2024-12-15 06:27:02.036392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.149 [2024-12-15 06:27:02.036425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.149 qpair failed and we were unable to recover it. 
00:36:42.149 [2024-12-15 06:27:02.036623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.149 [2024-12-15 06:27:02.036655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:42.149 qpair failed and we were unable to recover it.
[... the same three-record sequence — connect() failed, errno = 111 / sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats from 06:27:02.036861 through 06:27:02.060294 ...]
00:36:42.151 [2024-12-15 06:27:02.060630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.151 [2024-12-15 06:27:02.060707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.151 qpair failed and we were unable to recover it.
[... the sequence continues identically for tqpair=0x7ff290000b90 from 06:27:02.061030 through 06:27:02.066814 ...]
00:36:42.152 [2024-12-15 06:27:02.067079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.152 [2024-12-15 06:27:02.067114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.152 qpair failed and we were unable to recover it. 00:36:42.152 [2024-12-15 06:27:02.067315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.152 [2024-12-15 06:27:02.067348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.152 qpair failed and we were unable to recover it. 00:36:42.152 [2024-12-15 06:27:02.067628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.152 [2024-12-15 06:27:02.067661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.152 qpair failed and we were unable to recover it. 00:36:42.152 [2024-12-15 06:27:02.067873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.152 [2024-12-15 06:27:02.067907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.152 qpair failed and we were unable to recover it. 00:36:42.152 [2024-12-15 06:27:02.068123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.152 [2024-12-15 06:27:02.068159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.152 qpair failed and we were unable to recover it. 
00:36:42.152 [2024-12-15 06:27:02.068350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.152 [2024-12-15 06:27:02.068384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.152 qpair failed and we were unable to recover it. 00:36:42.152 [2024-12-15 06:27:02.068638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.152 [2024-12-15 06:27:02.068670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.152 qpair failed and we were unable to recover it. 00:36:42.152 [2024-12-15 06:27:02.068869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.152 [2024-12-15 06:27:02.068903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.152 qpair failed and we were unable to recover it. 00:36:42.152 [2024-12-15 06:27:02.069181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.152 [2024-12-15 06:27:02.069216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.152 qpair failed and we were unable to recover it. 00:36:42.152 [2024-12-15 06:27:02.069424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.152 [2024-12-15 06:27:02.069456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.152 qpair failed and we were unable to recover it. 
00:36:42.152 [2024-12-15 06:27:02.069691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.152 [2024-12-15 06:27:02.069725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.152 qpair failed and we were unable to recover it. 00:36:42.152 [2024-12-15 06:27:02.070012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.152 [2024-12-15 06:27:02.070048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.152 qpair failed and we were unable to recover it. 00:36:42.152 [2024-12-15 06:27:02.070266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.152 [2024-12-15 06:27:02.070299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.152 qpair failed and we were unable to recover it. 00:36:42.152 [2024-12-15 06:27:02.070525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.152 [2024-12-15 06:27:02.070558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.152 qpair failed and we were unable to recover it. 00:36:42.152 [2024-12-15 06:27:02.070745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.152 [2024-12-15 06:27:02.070789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.152 qpair failed and we were unable to recover it. 
00:36:42.152 [2024-12-15 06:27:02.071075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.152 [2024-12-15 06:27:02.071109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.152 qpair failed and we were unable to recover it. 00:36:42.152 [2024-12-15 06:27:02.071333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.152 [2024-12-15 06:27:02.071369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.152 qpair failed and we were unable to recover it. 00:36:42.152 [2024-12-15 06:27:02.071589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.152 [2024-12-15 06:27:02.071621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.152 qpair failed and we were unable to recover it. 00:36:42.152 [2024-12-15 06:27:02.071821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.152 [2024-12-15 06:27:02.071854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.152 qpair failed and we were unable to recover it. 00:36:42.152 [2024-12-15 06:27:02.072114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.152 [2024-12-15 06:27:02.072149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.152 qpair failed and we were unable to recover it. 
00:36:42.152 [2024-12-15 06:27:02.072360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.152 [2024-12-15 06:27:02.072392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.152 qpair failed and we were unable to recover it. 00:36:42.152 [2024-12-15 06:27:02.072655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.152 [2024-12-15 06:27:02.072689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.152 qpair failed and we were unable to recover it. 00:36:42.152 [2024-12-15 06:27:02.072900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.152 [2024-12-15 06:27:02.072935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.152 qpair failed and we were unable to recover it. 00:36:42.152 [2024-12-15 06:27:02.073171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.152 [2024-12-15 06:27:02.073207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.152 qpair failed and we were unable to recover it. 00:36:42.152 [2024-12-15 06:27:02.073459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.152 [2024-12-15 06:27:02.073492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.152 qpair failed and we were unable to recover it. 
00:36:42.153 [2024-12-15 06:27:02.073745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.073779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.074087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.074121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.074255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.074288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.074571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.074603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.074804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.074843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 
00:36:42.153 [2024-12-15 06:27:02.075035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.075071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.075268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.075301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.075488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.075521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.075832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.075864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.076127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.076162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 
00:36:42.153 [2024-12-15 06:27:02.076428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.076462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.076603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.076636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.076842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.076875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.077085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.077121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.077244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.077276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 
00:36:42.153 [2024-12-15 06:27:02.077555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.077589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.077774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.077810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.078018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.078053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.078248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.078283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.078485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.078520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 
00:36:42.153 [2024-12-15 06:27:02.078725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.078757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.078942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.078974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.079178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.079212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.079410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.079442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.079666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.079698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 
00:36:42.153 [2024-12-15 06:27:02.079807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.079841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.080058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.080092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.080367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.080400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.080587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.080619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.080801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.080834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 
00:36:42.153 [2024-12-15 06:27:02.081118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.081155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.081346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.081381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.081635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.081668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.081889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.081922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.082108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.082142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 
00:36:42.153 [2024-12-15 06:27:02.082349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.082381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.082586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.082619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.082822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.082854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.153 qpair failed and we were unable to recover it. 00:36:42.153 [2024-12-15 06:27:02.083121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.153 [2024-12-15 06:27:02.083155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.154 qpair failed and we were unable to recover it. 00:36:42.154 [2024-12-15 06:27:02.083439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.154 [2024-12-15 06:27:02.083470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.154 qpair failed and we were unable to recover it. 
00:36:42.154 [2024-12-15 06:27:02.083607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.154 [2024-12-15 06:27:02.083642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.154 qpair failed and we were unable to recover it. 00:36:42.154 [2024-12-15 06:27:02.083844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.154 [2024-12-15 06:27:02.083877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.154 qpair failed and we were unable to recover it. 00:36:42.154 [2024-12-15 06:27:02.084063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.154 [2024-12-15 06:27:02.084097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.154 qpair failed and we were unable to recover it. 00:36:42.154 [2024-12-15 06:27:02.084278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.154 [2024-12-15 06:27:02.084310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.154 qpair failed and we were unable to recover it. 00:36:42.154 [2024-12-15 06:27:02.084585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.154 [2024-12-15 06:27:02.084624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.154 qpair failed and we were unable to recover it. 
00:36:42.154 [2024-12-15 06:27:02.084811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.154 [2024-12-15 06:27:02.084844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.154 qpair failed and we were unable to recover it. 00:36:42.154 [2024-12-15 06:27:02.084969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.154 [2024-12-15 06:27:02.085011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.154 qpair failed and we were unable to recover it. 00:36:42.154 [2024-12-15 06:27:02.085239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.154 [2024-12-15 06:27:02.085274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.154 qpair failed and we were unable to recover it. 00:36:42.154 [2024-12-15 06:27:02.085473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.154 [2024-12-15 06:27:02.085505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.154 qpair failed and we were unable to recover it. 00:36:42.154 [2024-12-15 06:27:02.085710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.154 [2024-12-15 06:27:02.085744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.154 qpair failed and we were unable to recover it. 
00:36:42.154 [2024-12-15 06:27:02.085949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.154 [2024-12-15 06:27:02.085983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.154 qpair failed and we were unable to recover it. 00:36:42.154 [2024-12-15 06:27:02.086206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.154 [2024-12-15 06:27:02.086238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.154 qpair failed and we were unable to recover it. 00:36:42.154 [2024-12-15 06:27:02.086497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.154 [2024-12-15 06:27:02.086528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.154 qpair failed and we were unable to recover it. 00:36:42.154 [2024-12-15 06:27:02.086784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.154 [2024-12-15 06:27:02.086818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.154 qpair failed and we were unable to recover it. 00:36:42.154 [2024-12-15 06:27:02.086954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.154 [2024-12-15 06:27:02.086989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.154 qpair failed and we were unable to recover it. 
00:36:42.157 [2024-12-15 06:27:02.114483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.157 [2024-12-15 06:27:02.114516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.157 qpair failed and we were unable to recover it. 00:36:42.157 [2024-12-15 06:27:02.114722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.157 [2024-12-15 06:27:02.114753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.157 qpair failed and we were unable to recover it. 00:36:42.157 [2024-12-15 06:27:02.115055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.157 [2024-12-15 06:27:02.115089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.157 qpair failed and we were unable to recover it. 00:36:42.157 [2024-12-15 06:27:02.115366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.157 [2024-12-15 06:27:02.115399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.157 qpair failed and we were unable to recover it. 00:36:42.157 [2024-12-15 06:27:02.115623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.157 [2024-12-15 06:27:02.115656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.157 qpair failed and we were unable to recover it. 
00:36:42.157 [2024-12-15 06:27:02.115862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.157 [2024-12-15 06:27:02.115898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.157 qpair failed and we were unable to recover it. 00:36:42.157 [2024-12-15 06:27:02.116150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.157 [2024-12-15 06:27:02.116186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.157 qpair failed and we were unable to recover it. 00:36:42.157 [2024-12-15 06:27:02.116299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.157 [2024-12-15 06:27:02.116332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.157 qpair failed and we were unable to recover it. 00:36:42.157 [2024-12-15 06:27:02.116518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.157 [2024-12-15 06:27:02.116549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.157 qpair failed and we were unable to recover it. 00:36:42.157 [2024-12-15 06:27:02.116682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.157 [2024-12-15 06:27:02.116713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.157 qpair failed and we were unable to recover it. 
00:36:42.157 [2024-12-15 06:27:02.116926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.157 [2024-12-15 06:27:02.116958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.157 qpair failed and we were unable to recover it. 00:36:42.157 [2024-12-15 06:27:02.117228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.157 [2024-12-15 06:27:02.117260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.157 qpair failed and we were unable to recover it. 00:36:42.157 [2024-12-15 06:27:02.117541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.157 [2024-12-15 06:27:02.117573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.157 qpair failed and we were unable to recover it. 00:36:42.157 [2024-12-15 06:27:02.117770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.157 [2024-12-15 06:27:02.117803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.157 qpair failed and we were unable to recover it. 00:36:42.157 [2024-12-15 06:27:02.118014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.157 [2024-12-15 06:27:02.118048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.157 qpair failed and we were unable to recover it. 
00:36:42.157 [2024-12-15 06:27:02.118253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.157 [2024-12-15 06:27:02.118285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.157 qpair failed and we were unable to recover it. 00:36:42.157 [2024-12-15 06:27:02.118561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.157 [2024-12-15 06:27:02.118594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.157 qpair failed and we were unable to recover it. 00:36:42.157 [2024-12-15 06:27:02.118896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.157 [2024-12-15 06:27:02.118928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 00:36:42.158 [2024-12-15 06:27:02.119157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.119191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 00:36:42.158 [2024-12-15 06:27:02.119376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.119409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 
00:36:42.158 [2024-12-15 06:27:02.119556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.119587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 00:36:42.158 [2024-12-15 06:27:02.119916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.119948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 00:36:42.158 [2024-12-15 06:27:02.120157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.120190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 00:36:42.158 [2024-12-15 06:27:02.120391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.120423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 00:36:42.158 [2024-12-15 06:27:02.120625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.120657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 
00:36:42.158 [2024-12-15 06:27:02.120855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.120886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 00:36:42.158 [2024-12-15 06:27:02.121139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.121178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 00:36:42.158 [2024-12-15 06:27:02.121481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.121514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 00:36:42.158 [2024-12-15 06:27:02.121726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.121758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 00:36:42.158 [2024-12-15 06:27:02.121955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.121988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 
00:36:42.158 [2024-12-15 06:27:02.122179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.122211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 00:36:42.158 [2024-12-15 06:27:02.122494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.122527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 00:36:42.158 [2024-12-15 06:27:02.122680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.122713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 00:36:42.158 [2024-12-15 06:27:02.122919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.122951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 00:36:42.158 [2024-12-15 06:27:02.123162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.123196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 
00:36:42.158 [2024-12-15 06:27:02.123341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.123374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 00:36:42.158 [2024-12-15 06:27:02.123591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.123627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 00:36:42.158 [2024-12-15 06:27:02.123850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.123883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 00:36:42.158 [2024-12-15 06:27:02.124138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.124174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 00:36:42.158 [2024-12-15 06:27:02.124392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.124425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 
00:36:42.158 [2024-12-15 06:27:02.124657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.124690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 00:36:42.158 [2024-12-15 06:27:02.124969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.125010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 00:36:42.158 [2024-12-15 06:27:02.125169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.125205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 00:36:42.158 [2024-12-15 06:27:02.125417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.125449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 00:36:42.158 [2024-12-15 06:27:02.125716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.125749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 
00:36:42.158 [2024-12-15 06:27:02.125953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.125986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 00:36:42.158 [2024-12-15 06:27:02.126277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.126311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 00:36:42.158 [2024-12-15 06:27:02.126585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.126620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 00:36:42.158 [2024-12-15 06:27:02.126823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.126856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 00:36:42.158 [2024-12-15 06:27:02.127048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.127082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 
00:36:42.158 [2024-12-15 06:27:02.127332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.127364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 00:36:42.158 [2024-12-15 06:27:02.127565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.127598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 00:36:42.158 [2024-12-15 06:27:02.127740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.127772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 00:36:42.158 [2024-12-15 06:27:02.128007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.128040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 00:36:42.158 [2024-12-15 06:27:02.128246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.158 [2024-12-15 06:27:02.128279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.158 qpair failed and we were unable to recover it. 
00:36:42.158 [2024-12-15 06:27:02.128580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.159 [2024-12-15 06:27:02.128612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.159 qpair failed and we were unable to recover it. 00:36:42.159 [2024-12-15 06:27:02.128828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.159 [2024-12-15 06:27:02.128859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.159 qpair failed and we were unable to recover it. 00:36:42.159 [2024-12-15 06:27:02.129127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.159 [2024-12-15 06:27:02.129162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.159 qpair failed and we were unable to recover it. 00:36:42.159 [2024-12-15 06:27:02.129366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.159 [2024-12-15 06:27:02.129397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.159 qpair failed and we were unable to recover it. 00:36:42.159 [2024-12-15 06:27:02.129514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.159 [2024-12-15 06:27:02.129547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.159 qpair failed and we were unable to recover it. 
00:36:42.159 [2024-12-15 06:27:02.129784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.159 [2024-12-15 06:27:02.129816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.159 qpair failed and we were unable to recover it. 00:36:42.159 [2024-12-15 06:27:02.130032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.159 [2024-12-15 06:27:02.130066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.159 qpair failed and we were unable to recover it. 00:36:42.159 [2024-12-15 06:27:02.130343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.159 [2024-12-15 06:27:02.130375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.159 qpair failed and we were unable to recover it. 00:36:42.159 [2024-12-15 06:27:02.130503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.159 [2024-12-15 06:27:02.130536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.159 qpair failed and we were unable to recover it. 00:36:42.159 [2024-12-15 06:27:02.130668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.159 [2024-12-15 06:27:02.130701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.159 qpair failed and we were unable to recover it. 
00:36:42.159 [2024-12-15 06:27:02.130885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.159 [2024-12-15 06:27:02.130917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.159 qpair failed and we were unable to recover it. 00:36:42.159 [2024-12-15 06:27:02.131115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.159 [2024-12-15 06:27:02.131155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.159 qpair failed and we were unable to recover it. 00:36:42.159 [2024-12-15 06:27:02.131440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.159 [2024-12-15 06:27:02.131472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.159 qpair failed and we were unable to recover it. 00:36:42.159 [2024-12-15 06:27:02.131795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.159 [2024-12-15 06:27:02.131827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.159 qpair failed and we were unable to recover it. 00:36:42.159 [2024-12-15 06:27:02.132048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.159 [2024-12-15 06:27:02.132083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.159 qpair failed and we were unable to recover it. 
00:36:42.159 [2024-12-15 06:27:02.132280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.159 [2024-12-15 06:27:02.132313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.159 qpair failed and we were unable to recover it. 00:36:42.159 [2024-12-15 06:27:02.132447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.159 [2024-12-15 06:27:02.132479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.159 qpair failed and we were unable to recover it. 00:36:42.159 [2024-12-15 06:27:02.132631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.159 [2024-12-15 06:27:02.132663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.159 qpair failed and we were unable to recover it. 00:36:42.159 [2024-12-15 06:27:02.132914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.159 [2024-12-15 06:27:02.132946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.159 qpair failed and we were unable to recover it. 00:36:42.159 [2024-12-15 06:27:02.133183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.159 [2024-12-15 06:27:02.133216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.159 qpair failed and we were unable to recover it. 
00:36:42.159 [2024-12-15 06:27:02.133494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.159 [2024-12-15 06:27:02.133527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.159 qpair failed and we were unable to recover it.
00:36:42.163 [2024-12-15 06:27:02.163374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.163406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 00:36:42.163 [2024-12-15 06:27:02.163603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.163634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 00:36:42.163 [2024-12-15 06:27:02.163755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.163786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 00:36:42.163 [2024-12-15 06:27:02.164012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.164044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 00:36:42.163 [2024-12-15 06:27:02.164194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.164225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 
00:36:42.163 [2024-12-15 06:27:02.164339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.164371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 00:36:42.163 [2024-12-15 06:27:02.164624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.164656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 00:36:42.163 [2024-12-15 06:27:02.164852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.164884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 00:36:42.163 [2024-12-15 06:27:02.165067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.165100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 00:36:42.163 [2024-12-15 06:27:02.165300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.165331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 
00:36:42.163 [2024-12-15 06:27:02.165520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.165552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 00:36:42.163 [2024-12-15 06:27:02.165800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.165831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 00:36:42.163 [2024-12-15 06:27:02.166036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.166070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 00:36:42.163 [2024-12-15 06:27:02.166271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.166304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 00:36:42.163 [2024-12-15 06:27:02.166579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.166610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 
00:36:42.163 [2024-12-15 06:27:02.166861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.166894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 00:36:42.163 [2024-12-15 06:27:02.167074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.167109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 00:36:42.163 [2024-12-15 06:27:02.167319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.167352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 00:36:42.163 [2024-12-15 06:27:02.167577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.167610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 00:36:42.163 [2024-12-15 06:27:02.167863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.167896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 
00:36:42.163 [2024-12-15 06:27:02.168084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.168118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 00:36:42.163 [2024-12-15 06:27:02.168306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.168338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 00:36:42.163 [2024-12-15 06:27:02.168542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.168575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 00:36:42.163 [2024-12-15 06:27:02.168727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.168758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 00:36:42.163 [2024-12-15 06:27:02.169021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.169055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 
00:36:42.163 [2024-12-15 06:27:02.169311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.169344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 00:36:42.163 [2024-12-15 06:27:02.169468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.169499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 00:36:42.163 [2024-12-15 06:27:02.169698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.169730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 00:36:42.163 [2024-12-15 06:27:02.169935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.169967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 00:36:42.163 [2024-12-15 06:27:02.170191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.170224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 
00:36:42.163 [2024-12-15 06:27:02.170497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.170529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 00:36:42.163 [2024-12-15 06:27:02.170824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.170868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.163 qpair failed and we were unable to recover it. 00:36:42.163 [2024-12-15 06:27:02.171059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.163 [2024-12-15 06:27:02.171093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 00:36:42.164 [2024-12-15 06:27:02.171206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.171237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 00:36:42.164 [2024-12-15 06:27:02.171432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.171462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 
00:36:42.164 [2024-12-15 06:27:02.171611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.171643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 00:36:42.164 [2024-12-15 06:27:02.171894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.171931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 00:36:42.164 [2024-12-15 06:27:02.172123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.172155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 00:36:42.164 [2024-12-15 06:27:02.172342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.172375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 00:36:42.164 [2024-12-15 06:27:02.172555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.172588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 
00:36:42.164 [2024-12-15 06:27:02.172835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.172866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 00:36:42.164 [2024-12-15 06:27:02.173077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.173111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 00:36:42.164 [2024-12-15 06:27:02.173377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.173409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 00:36:42.164 [2024-12-15 06:27:02.173627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.173659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 00:36:42.164 [2024-12-15 06:27:02.173856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.173887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 
00:36:42.164 [2024-12-15 06:27:02.174081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.174116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 00:36:42.164 [2024-12-15 06:27:02.174312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.174344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 00:36:42.164 [2024-12-15 06:27:02.174580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.174612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 00:36:42.164 [2024-12-15 06:27:02.174869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.174902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 00:36:42.164 [2024-12-15 06:27:02.175104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.175139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 
00:36:42.164 [2024-12-15 06:27:02.175410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.175442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 00:36:42.164 [2024-12-15 06:27:02.175627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.175658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 00:36:42.164 [2024-12-15 06:27:02.175872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.175904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 00:36:42.164 [2024-12-15 06:27:02.176159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.176194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 00:36:42.164 [2024-12-15 06:27:02.176411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.176444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 
00:36:42.164 [2024-12-15 06:27:02.176568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.176600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 00:36:42.164 [2024-12-15 06:27:02.176722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.176754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 00:36:42.164 [2024-12-15 06:27:02.176933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.176964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 00:36:42.164 [2024-12-15 06:27:02.177228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.177261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 00:36:42.164 [2024-12-15 06:27:02.177535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.177567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 
00:36:42.164 [2024-12-15 06:27:02.177706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.177739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 00:36:42.164 [2024-12-15 06:27:02.178072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.178106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 00:36:42.164 [2024-12-15 06:27:02.178303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.178336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 00:36:42.164 [2024-12-15 06:27:02.178560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.178592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 00:36:42.164 [2024-12-15 06:27:02.178717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.178750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 
00:36:42.164 [2024-12-15 06:27:02.179022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.164 [2024-12-15 06:27:02.179057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.164 qpair failed and we were unable to recover it. 00:36:42.164 [2024-12-15 06:27:02.179259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.165 [2024-12-15 06:27:02.179290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.165 qpair failed and we were unable to recover it. 00:36:42.165 [2024-12-15 06:27:02.179547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.165 [2024-12-15 06:27:02.179579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.165 qpair failed and we were unable to recover it. 00:36:42.165 [2024-12-15 06:27:02.179800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.165 [2024-12-15 06:27:02.179832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.165 qpair failed and we were unable to recover it. 00:36:42.165 [2024-12-15 06:27:02.180056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.165 [2024-12-15 06:27:02.180089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.165 qpair failed and we were unable to recover it. 
00:36:42.165 [2024-12-15 06:27:02.180212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.165 [2024-12-15 06:27:02.180243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.165 qpair failed and we were unable to recover it. 00:36:42.165 [2024-12-15 06:27:02.180455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.165 [2024-12-15 06:27:02.180488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.165 qpair failed and we were unable to recover it. 00:36:42.165 [2024-12-15 06:27:02.180765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.165 [2024-12-15 06:27:02.180797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.165 qpair failed and we were unable to recover it. 00:36:42.165 [2024-12-15 06:27:02.180923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.165 [2024-12-15 06:27:02.180954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.165 qpair failed and we were unable to recover it. 00:36:42.165 [2024-12-15 06:27:02.181247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.165 [2024-12-15 06:27:02.181280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.165 qpair failed and we were unable to recover it. 
00:36:42.165 [2024-12-15 06:27:02.181579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.165 [2024-12-15 06:27:02.181611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.165 qpair failed and we were unable to recover it. 00:36:42.165 [2024-12-15 06:27:02.181791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.165 [2024-12-15 06:27:02.181828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.165 qpair failed and we were unable to recover it. 00:36:42.165 [2024-12-15 06:27:02.182124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.165 [2024-12-15 06:27:02.182158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.165 qpair failed and we were unable to recover it. 00:36:42.165 [2024-12-15 06:27:02.182379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.165 [2024-12-15 06:27:02.182411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.165 qpair failed and we were unable to recover it. 00:36:42.165 [2024-12-15 06:27:02.182616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.165 [2024-12-15 06:27:02.182649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.165 qpair failed and we were unable to recover it. 
00:36:42.167 [2024-12-15 06:27:02.205509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.167 [2024-12-15 06:27:02.205541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.167 qpair failed and we were unable to recover it.
00:36:42.167 [2024-12-15 06:27:02.205733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.167 [2024-12-15 06:27:02.205764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.167 qpair failed and we were unable to recover it.
00:36:42.167 [2024-12-15 06:27:02.205873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.205905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.206104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.206181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.206332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.206370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.206496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.206530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.206661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.206694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.206904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.206939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.207162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.207197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.207324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.207358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.207631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.207664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.207899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.207932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.208146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.208181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.209723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.209781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.210078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.210114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.210303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.210336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.210525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.210566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.210786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.210819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.210940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.210972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.211157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.211190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.211478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.211511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.211693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.211725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.211918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.211950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.212181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.212215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.212336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.212371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.212624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.212658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.212917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.212953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.213077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.213116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.213319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.213351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.213622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.213656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.213791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.213824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.214017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.214051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.214249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.214282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.214424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.214456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.214673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.214706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.214815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.214847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.215112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.215148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.215270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.215302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.215440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.215472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.215743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.168 [2024-12-15 06:27:02.215776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.168 qpair failed and we were unable to recover it.
00:36:42.168 [2024-12-15 06:27:02.215894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.215926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.216101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.216135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.216312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.216344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.216546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.216580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.216710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.216741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.216931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.216964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.217111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.217145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.217413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.217445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.217635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.217668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.217781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.217814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.218005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.218039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.218230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.218263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.218384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.218417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.218608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.218641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.218760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.218793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.218981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.219024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.219162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.219200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.219318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.219351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.219543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.219577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.219844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.219877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.220062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.220098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.220375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.220407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.220524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.220557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.220739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.220771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.220958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.220997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.221281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.221313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.221488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.221521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.221714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.221747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.221885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.221917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.222189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.222221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.222343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.222375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.222498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.222531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.222666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.169 [2024-12-15 06:27:02.222698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.169 qpair failed and we were unable to recover it.
00:36:42.169 [2024-12-15 06:27:02.222945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.170 [2024-12-15 06:27:02.222976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.170 qpair failed and we were unable to recover it.
00:36:42.170 [2024-12-15 06:27:02.223160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.170 [2024-12-15 06:27:02.223194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.170 qpair failed and we were unable to recover it.
00:36:42.170 [2024-12-15 06:27:02.223330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.170 [2024-12-15 06:27:02.223363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.170 qpair failed and we were unable to recover it.
00:36:42.170 [2024-12-15 06:27:02.223485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.170 [2024-12-15 06:27:02.223517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.170 qpair failed and we were unable to recover it.
00:36:42.170 [2024-12-15 06:27:02.223704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.170 [2024-12-15 06:27:02.223738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.170 qpair failed and we were unable to recover it.
00:36:42.170 [2024-12-15 06:27:02.223917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.170 [2024-12-15 06:27:02.223952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.170 qpair failed and we were unable to recover it.
00:36:42.170 [2024-12-15 06:27:02.224113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.170 [2024-12-15 06:27:02.224148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.170 qpair failed and we were unable to recover it.
00:36:42.170 [2024-12-15 06:27:02.224403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.170 [2024-12-15 06:27:02.224436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.170 qpair failed and we were unable to recover it.
00:36:42.170 [2024-12-15 06:27:02.224568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.170 [2024-12-15 06:27:02.224603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.170 qpair failed and we were unable to recover it.
00:36:42.170 [2024-12-15 06:27:02.224782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.170 [2024-12-15 06:27:02.224814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.170 qpair failed and we were unable to recover it.
00:36:42.170 [2024-12-15 06:27:02.224988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.170 [2024-12-15 06:27:02.225037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.170 qpair failed and we were unable to recover it.
00:36:42.170 [2024-12-15 06:27:02.225228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.170 [2024-12-15 06:27:02.225261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.170 qpair failed and we were unable to recover it.
00:36:42.170 [2024-12-15 06:27:02.225455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.170 [2024-12-15 06:27:02.225487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.170 qpair failed and we were unable to recover it.
00:36:42.170 [2024-12-15 06:27:02.225684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.170 [2024-12-15 06:27:02.225716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.170 qpair failed and we were unable to recover it.
00:36:42.170 [2024-12-15 06:27:02.225838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.170 [2024-12-15 06:27:02.225870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.170 qpair failed and we were unable to recover it.
00:36:42.170 [2024-12-15 06:27:02.226051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.170 [2024-12-15 06:27:02.226085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.170 qpair failed and we were unable to recover it.
00:36:42.170 [2024-12-15 06:27:02.226195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.170 [2024-12-15 06:27:02.226230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.170 qpair failed and we were unable to recover it.
00:36:42.170 [2024-12-15 06:27:02.226362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.170 [2024-12-15 06:27:02.226394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.170 qpair failed and we were unable to recover it.
00:36:42.170 [2024-12-15 06:27:02.226506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.170 [2024-12-15 06:27:02.226539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.170 qpair failed and we were unable to recover it.
00:36:42.170 [2024-12-15 06:27:02.226653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.170 [2024-12-15 06:27:02.226686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.170 qpair failed and we were unable to recover it.
00:36:42.170 [2024-12-15 06:27:02.226864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.170 [2024-12-15 06:27:02.226896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.170 qpair failed and we were unable to recover it.
00:36:42.170 [2024-12-15 06:27:02.227091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.170 [2024-12-15 06:27:02.227126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.170 qpair failed and we were unable to recover it.
00:36:42.170 [2024-12-15 06:27:02.227334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.170 [2024-12-15 06:27:02.227367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.170 qpair failed and we were unable to recover it. 00:36:42.170 [2024-12-15 06:27:02.227578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.170 [2024-12-15 06:27:02.227610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.170 qpair failed and we were unable to recover it. 00:36:42.170 [2024-12-15 06:27:02.227756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.170 [2024-12-15 06:27:02.227791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.170 qpair failed and we were unable to recover it. 00:36:42.170 [2024-12-15 06:27:02.227967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.170 [2024-12-15 06:27:02.228007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.170 qpair failed and we were unable to recover it. 00:36:42.170 [2024-12-15 06:27:02.228147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.170 [2024-12-15 06:27:02.228180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.170 qpair failed and we were unable to recover it. 
00:36:42.170 [2024-12-15 06:27:02.228382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.170 [2024-12-15 06:27:02.228414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.170 qpair failed and we were unable to recover it. 00:36:42.170 [2024-12-15 06:27:02.228592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.170 [2024-12-15 06:27:02.228625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.170 qpair failed and we were unable to recover it. 00:36:42.170 [2024-12-15 06:27:02.228877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.170 [2024-12-15 06:27:02.228910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.170 qpair failed and we were unable to recover it. 00:36:42.170 [2024-12-15 06:27:02.229044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.170 [2024-12-15 06:27:02.229078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.170 qpair failed and we were unable to recover it. 00:36:42.170 [2024-12-15 06:27:02.229261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.170 [2024-12-15 06:27:02.229294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.170 qpair failed and we were unable to recover it. 
00:36:42.170 [2024-12-15 06:27:02.229471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.170 [2024-12-15 06:27:02.229505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.170 qpair failed and we were unable to recover it. 00:36:42.170 [2024-12-15 06:27:02.229697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.170 [2024-12-15 06:27:02.229729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.170 qpair failed and we were unable to recover it. 00:36:42.170 [2024-12-15 06:27:02.229867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.170 [2024-12-15 06:27:02.229901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.170 qpair failed and we were unable to recover it. 00:36:42.170 [2024-12-15 06:27:02.230012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.170 [2024-12-15 06:27:02.230048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.170 qpair failed and we were unable to recover it. 00:36:42.170 [2024-12-15 06:27:02.230251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.170 [2024-12-15 06:27:02.230283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.170 qpair failed and we were unable to recover it. 
00:36:42.170 [2024-12-15 06:27:02.230487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.170 [2024-12-15 06:27:02.230520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.170 qpair failed and we were unable to recover it. 00:36:42.170 [2024-12-15 06:27:02.230656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.230688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 00:36:42.171 [2024-12-15 06:27:02.230815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.230847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 00:36:42.171 [2024-12-15 06:27:02.231030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.231064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 00:36:42.171 [2024-12-15 06:27:02.231275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.231307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 
00:36:42.171 [2024-12-15 06:27:02.231421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.231454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 00:36:42.171 [2024-12-15 06:27:02.231578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.231610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 00:36:42.171 [2024-12-15 06:27:02.231735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.231767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 00:36:42.171 [2024-12-15 06:27:02.231963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.232027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 00:36:42.171 [2024-12-15 06:27:02.232167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.232199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 
00:36:42.171 [2024-12-15 06:27:02.232319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.232352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 00:36:42.171 [2024-12-15 06:27:02.232477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.232511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 00:36:42.171 [2024-12-15 06:27:02.232796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.232828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 00:36:42.171 [2024-12-15 06:27:02.233014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.233055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 00:36:42.171 [2024-12-15 06:27:02.233174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.233206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 
00:36:42.171 [2024-12-15 06:27:02.233335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.233366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 00:36:42.171 [2024-12-15 06:27:02.233557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.233592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 00:36:42.171 [2024-12-15 06:27:02.233722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.233754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 00:36:42.171 [2024-12-15 06:27:02.233934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.233965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 00:36:42.171 [2024-12-15 06:27:02.234157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.234190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 
00:36:42.171 [2024-12-15 06:27:02.234302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.234335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 00:36:42.171 [2024-12-15 06:27:02.234527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.234562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 00:36:42.171 [2024-12-15 06:27:02.234802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.234835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 00:36:42.171 [2024-12-15 06:27:02.234964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.235009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 00:36:42.171 [2024-12-15 06:27:02.235118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.235150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 
00:36:42.171 [2024-12-15 06:27:02.235273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.235304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 00:36:42.171 [2024-12-15 06:27:02.235415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.235448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 00:36:42.171 [2024-12-15 06:27:02.235722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.235755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 00:36:42.171 [2024-12-15 06:27:02.235894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.235926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 00:36:42.171 [2024-12-15 06:27:02.236135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.236169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 
00:36:42.171 [2024-12-15 06:27:02.236370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.236402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 00:36:42.171 [2024-12-15 06:27:02.236513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.236545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 00:36:42.171 [2024-12-15 06:27:02.236722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.236757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 00:36:42.171 [2024-12-15 06:27:02.236873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.236907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 00:36:42.171 [2024-12-15 06:27:02.237081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.237116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 
00:36:42.171 [2024-12-15 06:27:02.237307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.171 [2024-12-15 06:27:02.237339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.171 qpair failed and we were unable to recover it. 00:36:42.171 [2024-12-15 06:27:02.237587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.237620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 00:36:42.172 [2024-12-15 06:27:02.237813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.237845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 00:36:42.172 [2024-12-15 06:27:02.237957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.237989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 00:36:42.172 [2024-12-15 06:27:02.238180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.238213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 
00:36:42.172 [2024-12-15 06:27:02.238435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.238469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 00:36:42.172 [2024-12-15 06:27:02.238583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.238615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 00:36:42.172 [2024-12-15 06:27:02.238745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.238778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 00:36:42.172 [2024-12-15 06:27:02.239066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.239102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 00:36:42.172 [2024-12-15 06:27:02.239304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.239337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 
00:36:42.172 [2024-12-15 06:27:02.239447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.239480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 00:36:42.172 [2024-12-15 06:27:02.239661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.239692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 00:36:42.172 [2024-12-15 06:27:02.239806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.239837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 00:36:42.172 [2024-12-15 06:27:02.239955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.239986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 00:36:42.172 [2024-12-15 06:27:02.240287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.240320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 
00:36:42.172 [2024-12-15 06:27:02.240526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.240558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 00:36:42.172 [2024-12-15 06:27:02.240664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.240695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 00:36:42.172 [2024-12-15 06:27:02.240803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.240840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 00:36:42.172 [2024-12-15 06:27:02.240981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.241045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 00:36:42.172 [2024-12-15 06:27:02.241226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.241257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 
00:36:42.172 [2024-12-15 06:27:02.241467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.241498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 00:36:42.172 [2024-12-15 06:27:02.241672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.241704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 00:36:42.172 [2024-12-15 06:27:02.241890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.241922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 00:36:42.172 [2024-12-15 06:27:02.242034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.242066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 00:36:42.172 [2024-12-15 06:27:02.242173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.242204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 
00:36:42.172 [2024-12-15 06:27:02.242324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.242356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 00:36:42.172 [2024-12-15 06:27:02.242468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.242500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 00:36:42.172 [2024-12-15 06:27:02.242626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.242668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 00:36:42.172 [2024-12-15 06:27:02.242787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.242818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 00:36:42.172 [2024-12-15 06:27:02.242927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.242958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 
00:36:42.172 [2024-12-15 06:27:02.243100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.243133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 00:36:42.172 [2024-12-15 06:27:02.243325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.243357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 00:36:42.172 [2024-12-15 06:27:02.243476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.243508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 00:36:42.172 [2024-12-15 06:27:02.243753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.172 [2024-12-15 06:27:02.243785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.172 qpair failed and we were unable to recover it. 00:36:42.454 [2024-12-15 06:27:02.243964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.454 [2024-12-15 06:27:02.244008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.454 qpair failed and we were unable to recover it. 
00:36:42.454 [2024-12-15 06:27:02.244135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.454 [2024-12-15 06:27:02.244169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.454 qpair failed and we were unable to recover it. 00:36:42.454 [2024-12-15 06:27:02.244312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.454 [2024-12-15 06:27:02.244344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.454 qpair failed and we were unable to recover it. 00:36:42.454 [2024-12-15 06:27:02.244584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.454 [2024-12-15 06:27:02.244616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.454 qpair failed and we were unable to recover it. 00:36:42.454 [2024-12-15 06:27:02.244796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.454 [2024-12-15 06:27:02.244827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.454 qpair failed and we were unable to recover it. 00:36:42.454 [2024-12-15 06:27:02.245037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.454 [2024-12-15 06:27:02.245072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.454 qpair failed and we were unable to recover it. 
00:36:42.454 [2024-12-15 06:27:02.245180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.454 [2024-12-15 06:27:02.245212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.454 qpair failed and we were unable to recover it. 00:36:42.454 [2024-12-15 06:27:02.245340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.454 [2024-12-15 06:27:02.245372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 00:36:42.455 [2024-12-15 06:27:02.245557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.245588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 00:36:42.455 [2024-12-15 06:27:02.245770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.245803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 00:36:42.455 [2024-12-15 06:27:02.245936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.245969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 
00:36:42.455 [2024-12-15 06:27:02.246162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.246197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 00:36:42.455 [2024-12-15 06:27:02.246312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.246344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 00:36:42.455 [2024-12-15 06:27:02.246474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.246505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 00:36:42.455 [2024-12-15 06:27:02.246631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.246662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 00:36:42.455 [2024-12-15 06:27:02.246839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.246871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 
00:36:42.455 [2024-12-15 06:27:02.247082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.247116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 00:36:42.455 [2024-12-15 06:27:02.247234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.247265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 00:36:42.455 [2024-12-15 06:27:02.247377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.247408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 00:36:42.455 [2024-12-15 06:27:02.247543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.247575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 00:36:42.455 [2024-12-15 06:27:02.247772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.247805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 
00:36:42.455 [2024-12-15 06:27:02.248072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.248107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 00:36:42.455 [2024-12-15 06:27:02.248284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.248317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 00:36:42.455 [2024-12-15 06:27:02.248496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.248529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 00:36:42.455 [2024-12-15 06:27:02.248799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.248836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 00:36:42.455 [2024-12-15 06:27:02.248943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.248982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 
00:36:42.455 [2024-12-15 06:27:02.249120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.249152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 00:36:42.455 [2024-12-15 06:27:02.249334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.249367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 00:36:42.455 [2024-12-15 06:27:02.249560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.249593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 00:36:42.455 [2024-12-15 06:27:02.249709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.249740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 00:36:42.455 [2024-12-15 06:27:02.249867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.249899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 
00:36:42.455 [2024-12-15 06:27:02.250021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.250054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 00:36:42.455 [2024-12-15 06:27:02.250172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.250205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 00:36:42.455 [2024-12-15 06:27:02.250317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.250348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 00:36:42.455 [2024-12-15 06:27:02.250620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.250653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 00:36:42.455 [2024-12-15 06:27:02.250787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.250819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 
00:36:42.455 [2024-12-15 06:27:02.251017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.251051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 00:36:42.455 [2024-12-15 06:27:02.251178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.251210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 00:36:42.455 [2024-12-15 06:27:02.251353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.251385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 00:36:42.455 [2024-12-15 06:27:02.251574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.251606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 00:36:42.455 [2024-12-15 06:27:02.251744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.455 [2024-12-15 06:27:02.251776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.455 qpair failed and we were unable to recover it. 
00:36:42.455 [2024-12-15 06:27:02.251953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.251985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.252144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.252177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.252378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.252411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.252602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.252634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.252834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.252866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 
00:36:42.456 [2024-12-15 06:27:02.253055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.253087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.253196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.253229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.253406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.253439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.253544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.253577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.253712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.253743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 
00:36:42.456 [2024-12-15 06:27:02.254006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.254040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.254293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.254327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.254502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.254533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.254723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.254756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.254864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.254898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 
00:36:42.456 [2024-12-15 06:27:02.255098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.255137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.255314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.255347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.255466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.255497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.255696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.255728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.255909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.255942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 
00:36:42.456 [2024-12-15 06:27:02.256099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.256133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.256319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.256351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.256557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.256588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.256693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.256730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.256905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.256938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 
00:36:42.456 [2024-12-15 06:27:02.257130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.257165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.257300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.257332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.257472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.257503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.257612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.257643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.257834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.257864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 
00:36:42.456 [2024-12-15 06:27:02.258054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.258088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.258266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.258300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.258502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.258535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.258655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.258686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.258812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.258843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 
00:36:42.456 [2024-12-15 06:27:02.259043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.259079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.259267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.259299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.259428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.259459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.456 qpair failed and we were unable to recover it. 00:36:42.456 [2024-12-15 06:27:02.259651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.456 [2024-12-15 06:27:02.259684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.457 qpair failed and we were unable to recover it. 00:36:42.457 [2024-12-15 06:27:02.259864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.457 [2024-12-15 06:27:02.259895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.457 qpair failed and we were unable to recover it. 
00:36:42.457 [2024-12-15 06:27:02.260046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.457 [2024-12-15 06:27:02.260080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.457 qpair failed and we were unable to recover it. 00:36:42.457 [2024-12-15 06:27:02.260272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.457 [2024-12-15 06:27:02.260305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.457 qpair failed and we were unable to recover it. 00:36:42.457 [2024-12-15 06:27:02.260478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.457 [2024-12-15 06:27:02.260510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.457 qpair failed and we were unable to recover it. 00:36:42.457 [2024-12-15 06:27:02.260636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.457 [2024-12-15 06:27:02.260668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.457 qpair failed and we were unable to recover it. 00:36:42.457 [2024-12-15 06:27:02.260792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.457 [2024-12-15 06:27:02.260824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.457 qpair failed and we were unable to recover it. 
00:36:42.457 [2024-12-15 06:27:02.261064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.457 [2024-12-15 06:27:02.261109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.457 qpair failed and we were unable to recover it. 00:36:42.457 [2024-12-15 06:27:02.261222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.457 [2024-12-15 06:27:02.261255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.457 qpair failed and we were unable to recover it. 00:36:42.457 [2024-12-15 06:27:02.261380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.457 [2024-12-15 06:27:02.261411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.457 qpair failed and we were unable to recover it. 00:36:42.457 [2024-12-15 06:27:02.261583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.457 [2024-12-15 06:27:02.261616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.457 qpair failed and we were unable to recover it. 00:36:42.457 [2024-12-15 06:27:02.261876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.457 [2024-12-15 06:27:02.261910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.457 qpair failed and we were unable to recover it. 
00:36:42.457 [2024-12-15 06:27:02.262089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.457 [2024-12-15 06:27:02.262123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.457 qpair failed and we were unable to recover it. 00:36:42.457 [2024-12-15 06:27:02.262237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.457 [2024-12-15 06:27:02.262269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.457 qpair failed and we were unable to recover it. 00:36:42.457 [2024-12-15 06:27:02.262456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.457 [2024-12-15 06:27:02.262487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.457 qpair failed and we were unable to recover it. 00:36:42.457 [2024-12-15 06:27:02.262599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.457 [2024-12-15 06:27:02.262632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.457 qpair failed and we were unable to recover it. 00:36:42.457 [2024-12-15 06:27:02.262734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.457 [2024-12-15 06:27:02.262765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.457 qpair failed and we were unable to recover it. 
00:36:42.457 [2024-12-15 06:27:02.262871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.457 [2024-12-15 06:27:02.262902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.457 qpair failed and we were unable to recover it. 
[... the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." error sequence repeated for tqpair=0x7ff284000b90 (addr=10.0.0.2, port=4420), timestamps 2024-12-15 06:27:02.262902 through 06:27:02.285715 ...]
00:36:42.460 [2024-12-15 06:27:02.285891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.460 [2024-12-15 06:27:02.285924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.460 qpair failed and we were unable to recover it. 00:36:42.460 [2024-12-15 06:27:02.286111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.460 [2024-12-15 06:27:02.286152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.460 qpair failed and we were unable to recover it. 00:36:42.460 [2024-12-15 06:27:02.286261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.460 [2024-12-15 06:27:02.286292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.460 qpair failed and we were unable to recover it. 00:36:42.460 [2024-12-15 06:27:02.286397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.460 [2024-12-15 06:27:02.286429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.460 qpair failed and we were unable to recover it. 00:36:42.460 [2024-12-15 06:27:02.286569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.460 [2024-12-15 06:27:02.286600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.460 qpair failed and we were unable to recover it. 
00:36:42.460 [2024-12-15 06:27:02.286707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.460 [2024-12-15 06:27:02.286744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.460 qpair failed and we were unable to recover it. 00:36:42.460 [2024-12-15 06:27:02.286852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.460 [2024-12-15 06:27:02.286884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.460 qpair failed and we were unable to recover it. 00:36:42.460 [2024-12-15 06:27:02.287070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.460 [2024-12-15 06:27:02.287103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.460 qpair failed and we were unable to recover it. 00:36:42.460 [2024-12-15 06:27:02.287228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.460 [2024-12-15 06:27:02.287258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.460 qpair failed and we were unable to recover it. 00:36:42.460 [2024-12-15 06:27:02.287451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.460 [2024-12-15 06:27:02.287486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.460 qpair failed and we were unable to recover it. 
00:36:42.460 [2024-12-15 06:27:02.287660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.460 [2024-12-15 06:27:02.287693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.460 qpair failed and we were unable to recover it. 00:36:42.460 [2024-12-15 06:27:02.287807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.460 [2024-12-15 06:27:02.287838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.460 qpair failed and we were unable to recover it. 00:36:42.460 [2024-12-15 06:27:02.287956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.460 [2024-12-15 06:27:02.287990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.460 qpair failed and we were unable to recover it. 00:36:42.460 [2024-12-15 06:27:02.288230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.460 [2024-12-15 06:27:02.288272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.460 qpair failed and we were unable to recover it. 00:36:42.460 [2024-12-15 06:27:02.288467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.460 [2024-12-15 06:27:02.288500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.460 qpair failed and we were unable to recover it. 
00:36:42.460 [2024-12-15 06:27:02.288611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.460 [2024-12-15 06:27:02.288643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.461 qpair failed and we were unable to recover it. 00:36:42.461 [2024-12-15 06:27:02.288829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.461 [2024-12-15 06:27:02.288861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.461 qpair failed and we were unable to recover it. 00:36:42.461 [2024-12-15 06:27:02.289080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.461 [2024-12-15 06:27:02.289114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.461 qpair failed and we were unable to recover it. 00:36:42.461 [2024-12-15 06:27:02.289292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.461 [2024-12-15 06:27:02.289325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.461 qpair failed and we were unable to recover it. 00:36:42.461 [2024-12-15 06:27:02.289446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.461 [2024-12-15 06:27:02.289476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.461 qpair failed and we were unable to recover it. 
00:36:42.461 [2024-12-15 06:27:02.289649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.461 [2024-12-15 06:27:02.289680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.461 qpair failed and we were unable to recover it. 00:36:42.461 [2024-12-15 06:27:02.289873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.461 [2024-12-15 06:27:02.289905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.461 qpair failed and we were unable to recover it. 00:36:42.461 [2024-12-15 06:27:02.290084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.461 [2024-12-15 06:27:02.290117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.461 qpair failed and we were unable to recover it. 00:36:42.461 [2024-12-15 06:27:02.290239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.461 [2024-12-15 06:27:02.290271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.461 qpair failed and we were unable to recover it. 00:36:42.461 [2024-12-15 06:27:02.290374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.461 [2024-12-15 06:27:02.290406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.461 qpair failed and we were unable to recover it. 
00:36:42.461 [2024-12-15 06:27:02.290584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.461 [2024-12-15 06:27:02.290616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.461 qpair failed and we were unable to recover it. 00:36:42.461 [2024-12-15 06:27:02.290794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.461 [2024-12-15 06:27:02.290826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.461 qpair failed and we were unable to recover it. 00:36:42.461 [2024-12-15 06:27:02.291022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.461 [2024-12-15 06:27:02.291055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.461 qpair failed and we were unable to recover it. 00:36:42.461 [2024-12-15 06:27:02.291176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.461 [2024-12-15 06:27:02.291207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.461 qpair failed and we were unable to recover it. 00:36:42.461 [2024-12-15 06:27:02.291330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.461 [2024-12-15 06:27:02.291364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.461 qpair failed and we were unable to recover it. 
00:36:42.461 [2024-12-15 06:27:02.291554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.461 [2024-12-15 06:27:02.291587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.461 qpair failed and we were unable to recover it. 00:36:42.461 [2024-12-15 06:27:02.291762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.461 [2024-12-15 06:27:02.291796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.461 qpair failed and we were unable to recover it. 00:36:42.461 [2024-12-15 06:27:02.291919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.461 [2024-12-15 06:27:02.291951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.461 qpair failed and we were unable to recover it. 00:36:42.461 [2024-12-15 06:27:02.292088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.461 [2024-12-15 06:27:02.292122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.461 qpair failed and we were unable to recover it. 00:36:42.461 [2024-12-15 06:27:02.292395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.461 [2024-12-15 06:27:02.292428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.461 qpair failed and we were unable to recover it. 
00:36:42.461 [2024-12-15 06:27:02.292602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.461 [2024-12-15 06:27:02.292634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.461 qpair failed and we were unable to recover it. 00:36:42.461 [2024-12-15 06:27:02.292805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.461 [2024-12-15 06:27:02.292837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.461 qpair failed and we were unable to recover it. 00:36:42.461 [2024-12-15 06:27:02.292963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.461 [2024-12-15 06:27:02.293005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.461 qpair failed and we were unable to recover it. 00:36:42.461 [2024-12-15 06:27:02.293112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.461 [2024-12-15 06:27:02.293144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.461 qpair failed and we were unable to recover it. 00:36:42.461 [2024-12-15 06:27:02.293380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.461 [2024-12-15 06:27:02.293411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.461 qpair failed and we were unable to recover it. 
00:36:42.461 [2024-12-15 06:27:02.293592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.461 [2024-12-15 06:27:02.293664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.461 qpair failed and we were unable to recover it.
[... the three messages above repeated 39 more times for tqpair=0x7ff288000b90, timestamps 06:27:02.293901 through 06:27:02.302112 ...]
00:36:42.462 [2024-12-15 06:27:02.302285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.462 [2024-12-15 06:27:02.302356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.462 qpair failed and we were unable to recover it.
[... the three messages above repeated 14 more times for tqpair=0x1c89cd0, timestamps 06:27:02.302504 through 06:27:02.305572 ...]
00:36:42.463 [2024-12-15 06:27:02.305678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.305709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 00:36:42.463 [2024-12-15 06:27:02.305843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.305875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 00:36:42.463 [2024-12-15 06:27:02.306081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.306115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 00:36:42.463 [2024-12-15 06:27:02.306290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.306322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 00:36:42.463 [2024-12-15 06:27:02.306563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.306594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 
00:36:42.463 [2024-12-15 06:27:02.306796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.306828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 00:36:42.463 [2024-12-15 06:27:02.307067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.307099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 00:36:42.463 [2024-12-15 06:27:02.307232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.307265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 00:36:42.463 [2024-12-15 06:27:02.307402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.307434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 00:36:42.463 [2024-12-15 06:27:02.307567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.307599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 
00:36:42.463 [2024-12-15 06:27:02.307769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.307801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 00:36:42.463 [2024-12-15 06:27:02.307980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.308024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 00:36:42.463 [2024-12-15 06:27:02.308286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.308325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 00:36:42.463 [2024-12-15 06:27:02.308439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.308471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 00:36:42.463 [2024-12-15 06:27:02.308685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.308716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 
00:36:42.463 [2024-12-15 06:27:02.308909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.308941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 00:36:42.463 [2024-12-15 06:27:02.309131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.309165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 00:36:42.463 [2024-12-15 06:27:02.309292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.309324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 00:36:42.463 [2024-12-15 06:27:02.309444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.309476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 00:36:42.463 [2024-12-15 06:27:02.309757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.309789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 
00:36:42.463 [2024-12-15 06:27:02.309972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.310014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 00:36:42.463 [2024-12-15 06:27:02.310132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.310164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 00:36:42.463 [2024-12-15 06:27:02.310350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.310382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 00:36:42.463 [2024-12-15 06:27:02.310571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.310603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 00:36:42.463 [2024-12-15 06:27:02.310709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.310741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 
00:36:42.463 [2024-12-15 06:27:02.310882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.310914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 00:36:42.463 [2024-12-15 06:27:02.311184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.311218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 00:36:42.463 [2024-12-15 06:27:02.311344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.311376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 00:36:42.463 [2024-12-15 06:27:02.311612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.311644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 00:36:42.463 [2024-12-15 06:27:02.311897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.311928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 
00:36:42.463 [2024-12-15 06:27:02.312039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.312072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 00:36:42.463 [2024-12-15 06:27:02.312202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.463 [2024-12-15 06:27:02.312234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.463 qpair failed and we were unable to recover it. 00:36:42.463 [2024-12-15 06:27:02.312357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.312390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 00:36:42.464 [2024-12-15 06:27:02.312613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.312645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 00:36:42.464 [2024-12-15 06:27:02.312836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.312868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 
00:36:42.464 [2024-12-15 06:27:02.313008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.313042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 00:36:42.464 [2024-12-15 06:27:02.313161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.313192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 00:36:42.464 [2024-12-15 06:27:02.313303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.313336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 00:36:42.464 [2024-12-15 06:27:02.313460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.313493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 00:36:42.464 [2024-12-15 06:27:02.313675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.313713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 
00:36:42.464 [2024-12-15 06:27:02.313989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.314031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 00:36:42.464 [2024-12-15 06:27:02.314210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.314242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 00:36:42.464 [2024-12-15 06:27:02.314372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.314403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 00:36:42.464 [2024-12-15 06:27:02.314654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.314686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 00:36:42.464 [2024-12-15 06:27:02.314920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.314951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 
00:36:42.464 [2024-12-15 06:27:02.315096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.315129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 00:36:42.464 [2024-12-15 06:27:02.315302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.315334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 00:36:42.464 [2024-12-15 06:27:02.315519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.315551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 00:36:42.464 [2024-12-15 06:27:02.315671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.315703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 00:36:42.464 [2024-12-15 06:27:02.315945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.315976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 
00:36:42.464 [2024-12-15 06:27:02.316176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.316209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 00:36:42.464 [2024-12-15 06:27:02.316394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.316427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 00:36:42.464 [2024-12-15 06:27:02.316606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.316639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 00:36:42.464 [2024-12-15 06:27:02.316782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.316815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 00:36:42.464 [2024-12-15 06:27:02.316946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.316978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 
00:36:42.464 [2024-12-15 06:27:02.317111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.317143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 00:36:42.464 [2024-12-15 06:27:02.317318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.317349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 00:36:42.464 [2024-12-15 06:27:02.317464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.317497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 00:36:42.464 [2024-12-15 06:27:02.317686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.317719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 00:36:42.464 [2024-12-15 06:27:02.317890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.317921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 
00:36:42.464 [2024-12-15 06:27:02.318123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.318158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 00:36:42.464 [2024-12-15 06:27:02.318294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.318326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 00:36:42.464 [2024-12-15 06:27:02.318436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.318467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 00:36:42.464 [2024-12-15 06:27:02.318654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.318685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 00:36:42.464 [2024-12-15 06:27:02.318792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.318823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 
00:36:42.464 [2024-12-15 06:27:02.318990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.319030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 00:36:42.464 [2024-12-15 06:27:02.319214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.319251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 00:36:42.464 [2024-12-15 06:27:02.319459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.464 [2024-12-15 06:27:02.319492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.464 qpair failed and we were unable to recover it. 00:36:42.464 [2024-12-15 06:27:02.319711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.319742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 00:36:42.465 [2024-12-15 06:27:02.319865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.319896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 
00:36:42.465 [2024-12-15 06:27:02.320181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.320216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 00:36:42.465 [2024-12-15 06:27:02.320455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.320487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 00:36:42.465 [2024-12-15 06:27:02.320696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.320728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 00:36:42.465 [2024-12-15 06:27:02.320920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.320953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 00:36:42.465 [2024-12-15 06:27:02.321156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.321189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 
00:36:42.465 [2024-12-15 06:27:02.321362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.321395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 00:36:42.465 [2024-12-15 06:27:02.321501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.321534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 00:36:42.465 [2024-12-15 06:27:02.321717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.321749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 00:36:42.465 [2024-12-15 06:27:02.321891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.321923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 00:36:42.465 [2024-12-15 06:27:02.322045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.322080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 
00:36:42.465 [2024-12-15 06:27:02.322205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.322238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 00:36:42.465 [2024-12-15 06:27:02.322350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.322381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 00:36:42.465 [2024-12-15 06:27:02.322601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.322632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 00:36:42.465 [2024-12-15 06:27:02.322820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.322852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 00:36:42.465 [2024-12-15 06:27:02.322975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.323013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 
00:36:42.465 [2024-12-15 06:27:02.323200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.323231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 00:36:42.465 [2024-12-15 06:27:02.323424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.323456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 00:36:42.465 [2024-12-15 06:27:02.323646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.323677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 00:36:42.465 [2024-12-15 06:27:02.323852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.323883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 00:36:42.465 [2024-12-15 06:27:02.324088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.324121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 
00:36:42.465 [2024-12-15 06:27:02.324289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.324321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 00:36:42.465 [2024-12-15 06:27:02.324609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.324641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 00:36:42.465 [2024-12-15 06:27:02.324758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.324791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 00:36:42.465 [2024-12-15 06:27:02.324967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.325008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 00:36:42.465 [2024-12-15 06:27:02.325140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.325172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 
00:36:42.465 [2024-12-15 06:27:02.325306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.325338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 00:36:42.465 [2024-12-15 06:27:02.325577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.325610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 00:36:42.465 [2024-12-15 06:27:02.325850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.325881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 00:36:42.465 [2024-12-15 06:27:02.326064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.326097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 00:36:42.465 [2024-12-15 06:27:02.326310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.326342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 
00:36:42.465 [2024-12-15 06:27:02.326540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.326571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 00:36:42.465 [2024-12-15 06:27:02.326847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.326879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 00:36:42.465 [2024-12-15 06:27:02.327145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.327179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 00:36:42.465 [2024-12-15 06:27:02.327362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.465 [2024-12-15 06:27:02.327394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.465 qpair failed and we were unable to recover it. 00:36:42.465 [2024-12-15 06:27:02.327567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.327599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 
00:36:42.466 [2024-12-15 06:27:02.327774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.327806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 00:36:42.466 [2024-12-15 06:27:02.327913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.327944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 00:36:42.466 [2024-12-15 06:27:02.328200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.328271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 00:36:42.466 [2024-12-15 06:27:02.328471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.328510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 00:36:42.466 [2024-12-15 06:27:02.328700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.328734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 
00:36:42.466 [2024-12-15 06:27:02.328920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.328953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 00:36:42.466 [2024-12-15 06:27:02.329181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.329216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 00:36:42.466 [2024-12-15 06:27:02.329342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.329373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 00:36:42.466 [2024-12-15 06:27:02.329491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.329522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 00:36:42.466 [2024-12-15 06:27:02.329763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.329795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 
00:36:42.466 [2024-12-15 06:27:02.330015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.330050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 00:36:42.466 [2024-12-15 06:27:02.330312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.330345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 00:36:42.466 [2024-12-15 06:27:02.330550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.330583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 00:36:42.466 [2024-12-15 06:27:02.330778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.330810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 00:36:42.466 [2024-12-15 06:27:02.331049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.331083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 
00:36:42.466 [2024-12-15 06:27:02.331266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.331299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 00:36:42.466 [2024-12-15 06:27:02.331590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.331623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 00:36:42.466 [2024-12-15 06:27:02.331820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.331851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 00:36:42.466 [2024-12-15 06:27:02.332114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.332148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 00:36:42.466 [2024-12-15 06:27:02.332267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.332300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 
00:36:42.466 [2024-12-15 06:27:02.332506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.332538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 00:36:42.466 [2024-12-15 06:27:02.332731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.332763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 00:36:42.466 [2024-12-15 06:27:02.333021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.333055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 00:36:42.466 [2024-12-15 06:27:02.333249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.333281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 00:36:42.466 [2024-12-15 06:27:02.333389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.333421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 
00:36:42.466 [2024-12-15 06:27:02.333554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.333588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 00:36:42.466 [2024-12-15 06:27:02.333829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.333862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 00:36:42.466 [2024-12-15 06:27:02.333965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.334004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 00:36:42.466 [2024-12-15 06:27:02.334144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.334174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 00:36:42.466 [2024-12-15 06:27:02.334364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.334396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 
00:36:42.466 [2024-12-15 06:27:02.334611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.334641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 00:36:42.466 [2024-12-15 06:27:02.334905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.334936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 00:36:42.466 [2024-12-15 06:27:02.335125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.335157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 00:36:42.466 [2024-12-15 06:27:02.335396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.335428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 00:36:42.466 [2024-12-15 06:27:02.335610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.335640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 
00:36:42.466 [2024-12-15 06:27:02.335906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.335937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 00:36:42.466 [2024-12-15 06:27:02.336162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.336194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.466 qpair failed and we were unable to recover it. 00:36:42.466 [2024-12-15 06:27:02.336307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.466 [2024-12-15 06:27:02.336337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 00:36:42.467 [2024-12-15 06:27:02.336606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.336637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 00:36:42.467 [2024-12-15 06:27:02.336813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.336842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 
00:36:42.467 [2024-12-15 06:27:02.336977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.337020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 00:36:42.467 [2024-12-15 06:27:02.337151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.337182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 00:36:42.467 [2024-12-15 06:27:02.337321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.337356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 00:36:42.467 [2024-12-15 06:27:02.337489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.337520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 00:36:42.467 [2024-12-15 06:27:02.337635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.337666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 
00:36:42.467 [2024-12-15 06:27:02.337939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.337969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 00:36:42.467 [2024-12-15 06:27:02.338166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.338197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 00:36:42.467 [2024-12-15 06:27:02.338392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.338422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 00:36:42.467 [2024-12-15 06:27:02.338547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.338577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 00:36:42.467 [2024-12-15 06:27:02.338686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.338717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 
00:36:42.467 [2024-12-15 06:27:02.338848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.338878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 00:36:42.467 [2024-12-15 06:27:02.339083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.339117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 00:36:42.467 [2024-12-15 06:27:02.339292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.339323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 00:36:42.467 [2024-12-15 06:27:02.339445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.339476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 00:36:42.467 [2024-12-15 06:27:02.339589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.339619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 
00:36:42.467 [2024-12-15 06:27:02.339737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.339769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 00:36:42.467 [2024-12-15 06:27:02.339886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.339917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 00:36:42.467 [2024-12-15 06:27:02.340089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.340135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 00:36:42.467 [2024-12-15 06:27:02.340318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.340350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 00:36:42.467 [2024-12-15 06:27:02.340557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.340591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 
00:36:42.467 [2024-12-15 06:27:02.340708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.340740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 00:36:42.467 [2024-12-15 06:27:02.340919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.340951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 00:36:42.467 [2024-12-15 06:27:02.341085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.341118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 00:36:42.467 [2024-12-15 06:27:02.341244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.341278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 00:36:42.467 [2024-12-15 06:27:02.341396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.341428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 
00:36:42.467 [2024-12-15 06:27:02.341609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.341642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 00:36:42.467 [2024-12-15 06:27:02.341818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.341852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 00:36:42.467 [2024-12-15 06:27:02.341974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.342018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 00:36:42.467 [2024-12-15 06:27:02.342129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.342162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 00:36:42.467 [2024-12-15 06:27:02.342309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.467 [2024-12-15 06:27:02.342341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.467 qpair failed and we were unable to recover it. 
00:36:42.470 [2024-12-15 06:27:02.363844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.470 [2024-12-15 06:27:02.363877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.470 qpair failed and we were unable to recover it. 00:36:42.470 [2024-12-15 06:27:02.364034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.470 [2024-12-15 06:27:02.364067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.470 qpair failed and we were unable to recover it. 00:36:42.470 [2024-12-15 06:27:02.364323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.470 [2024-12-15 06:27:02.364356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.470 qpair failed and we were unable to recover it. 00:36:42.470 [2024-12-15 06:27:02.364550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.470 [2024-12-15 06:27:02.364581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.470 qpair failed and we were unable to recover it. 00:36:42.470 [2024-12-15 06:27:02.364826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.470 [2024-12-15 06:27:02.364859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.470 qpair failed and we were unable to recover it. 
00:36:42.470 [2024-12-15 06:27:02.365040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.365073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.365258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.365292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.365413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.365446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.365621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.365653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.365864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.365896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 
00:36:42.471 [2024-12-15 06:27:02.366021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.366055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.366169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.366202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.366313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.366347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.366534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.366567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.366747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.366780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 
00:36:42.471 [2024-12-15 06:27:02.366977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.367019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.367194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.367227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.367342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.367376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.367477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.367508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.367824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.367857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 
00:36:42.471 [2024-12-15 06:27:02.368010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.368044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.368240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.368273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.368403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.368435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.368695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.368730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.369001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.369035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 
00:36:42.471 [2024-12-15 06:27:02.369238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.369272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.369466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.369499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.369682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.369719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.369892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.369924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.370047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.370082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 
00:36:42.471 [2024-12-15 06:27:02.370277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.370309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.370430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.370462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.370566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.370597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.370717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.370748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.370940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.370972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 
00:36:42.471 [2024-12-15 06:27:02.371099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.371131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.371257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.371291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.371436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.371468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.371651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.371683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.371853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.371885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 
00:36:42.471 [2024-12-15 06:27:02.372063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.372097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.372212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.372243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.372368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.372401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.471 qpair failed and we were unable to recover it. 00:36:42.471 [2024-12-15 06:27:02.372608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.471 [2024-12-15 06:27:02.372640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 00:36:42.472 [2024-12-15 06:27:02.372834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.372868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 
00:36:42.472 [2024-12-15 06:27:02.373064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.373098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 00:36:42.472 [2024-12-15 06:27:02.373224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.373255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 00:36:42.472 [2024-12-15 06:27:02.373389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.373422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 00:36:42.472 [2024-12-15 06:27:02.373603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.373637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 00:36:42.472 [2024-12-15 06:27:02.373810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.373842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 
00:36:42.472 [2024-12-15 06:27:02.374026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.374060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 00:36:42.472 [2024-12-15 06:27:02.374322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.374352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 00:36:42.472 [2024-12-15 06:27:02.374473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.374504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 00:36:42.472 [2024-12-15 06:27:02.374694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.374726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 00:36:42.472 [2024-12-15 06:27:02.374917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.374950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 
00:36:42.472 [2024-12-15 06:27:02.375135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.375170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 00:36:42.472 [2024-12-15 06:27:02.375295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.375326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 00:36:42.472 [2024-12-15 06:27:02.375444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.375478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 00:36:42.472 [2024-12-15 06:27:02.375588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.375619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 00:36:42.472 [2024-12-15 06:27:02.375815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.375847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 
00:36:42.472 [2024-12-15 06:27:02.375956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.375989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 00:36:42.472 [2024-12-15 06:27:02.376201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.376233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 00:36:42.472 [2024-12-15 06:27:02.376520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.376552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 00:36:42.472 [2024-12-15 06:27:02.376687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.376720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 00:36:42.472 [2024-12-15 06:27:02.376817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.376849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 
00:36:42.472 [2024-12-15 06:27:02.376984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.377025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 00:36:42.472 [2024-12-15 06:27:02.377146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.377177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 00:36:42.472 [2024-12-15 06:27:02.377376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.377413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 00:36:42.472 [2024-12-15 06:27:02.377555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.377587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 00:36:42.472 [2024-12-15 06:27:02.377717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.377750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 
00:36:42.472 [2024-12-15 06:27:02.377872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.377903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 00:36:42.472 [2024-12-15 06:27:02.378097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.378132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 00:36:42.472 [2024-12-15 06:27:02.378249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.378281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 00:36:42.472 [2024-12-15 06:27:02.378488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.378521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 00:36:42.472 [2024-12-15 06:27:02.378702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.378734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 
00:36:42.472 [2024-12-15 06:27:02.378922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.378953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 00:36:42.472 [2024-12-15 06:27:02.379141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.379175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 00:36:42.472 [2024-12-15 06:27:02.379311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.379343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 00:36:42.472 [2024-12-15 06:27:02.379456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.472 [2024-12-15 06:27:02.379487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.472 qpair failed and we were unable to recover it. 00:36:42.473 [2024-12-15 06:27:02.379604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.473 [2024-12-15 06:27:02.379635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.473 qpair failed and we were unable to recover it. 
00:36:42.473 [2024-12-15 06:27:02.379806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.473 [2024-12-15 06:27:02.379838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.473 qpair failed and we were unable to recover it.
[... the three-line record above (connect() failed with errno = 111, i.e. ECONNREFUSED; sock connection error on tqpair=0x7ff284000b90 to 10.0.0.2:4420; qpair unrecoverable) repeats unchanged for every reconnect attempt from 06:27:02.379955 through 06:27:02.403641; only the timestamps differ, so the repeated records are elided ...]
00:36:42.476 [2024-12-15 06:27:02.403781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.476 [2024-12-15 06:27:02.403816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.476 qpair failed and we were unable to recover it. 00:36:42.476 [2024-12-15 06:27:02.403947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.476 [2024-12-15 06:27:02.403980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.476 qpair failed and we were unable to recover it. 00:36:42.476 [2024-12-15 06:27:02.404207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.476 [2024-12-15 06:27:02.404240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.476 qpair failed and we were unable to recover it. 00:36:42.476 [2024-12-15 06:27:02.404373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.476 [2024-12-15 06:27:02.404405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.476 qpair failed and we were unable to recover it. 00:36:42.476 [2024-12-15 06:27:02.404593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.476 [2024-12-15 06:27:02.404625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.476 qpair failed and we were unable to recover it. 
00:36:42.476 [2024-12-15 06:27:02.404883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.476 [2024-12-15 06:27:02.404915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.476 qpair failed and we were unable to recover it. 00:36:42.476 [2024-12-15 06:27:02.405134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.476 [2024-12-15 06:27:02.405171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.476 qpair failed and we were unable to recover it. 00:36:42.476 [2024-12-15 06:27:02.405309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.476 [2024-12-15 06:27:02.405341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.476 qpair failed and we were unable to recover it. 00:36:42.476 [2024-12-15 06:27:02.405468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.476 [2024-12-15 06:27:02.405499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.476 qpair failed and we were unable to recover it. 00:36:42.476 [2024-12-15 06:27:02.405686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.476 [2024-12-15 06:27:02.405719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.476 qpair failed and we were unable to recover it. 
00:36:42.476 [2024-12-15 06:27:02.405826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.476 [2024-12-15 06:27:02.405857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.476 qpair failed and we were unable to recover it. 00:36:42.476 [2024-12-15 06:27:02.405972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.476 [2024-12-15 06:27:02.406013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.476 qpair failed and we were unable to recover it. 00:36:42.476 [2024-12-15 06:27:02.406275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.476 [2024-12-15 06:27:02.406308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.476 qpair failed and we were unable to recover it. 00:36:42.476 [2024-12-15 06:27:02.406522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.476 [2024-12-15 06:27:02.406554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.476 qpair failed and we were unable to recover it. 00:36:42.476 [2024-12-15 06:27:02.406672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.476 [2024-12-15 06:27:02.406705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.476 qpair failed and we were unable to recover it. 
00:36:42.476 [2024-12-15 06:27:02.406833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.476 [2024-12-15 06:27:02.406864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.476 qpair failed and we were unable to recover it. 00:36:42.476 [2024-12-15 06:27:02.407068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.476 [2024-12-15 06:27:02.407101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.476 qpair failed and we were unable to recover it. 00:36:42.476 [2024-12-15 06:27:02.407278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.476 [2024-12-15 06:27:02.407311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.476 qpair failed and we were unable to recover it. 00:36:42.476 [2024-12-15 06:27:02.407599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.476 [2024-12-15 06:27:02.407633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.476 qpair failed and we were unable to recover it. 00:36:42.476 [2024-12-15 06:27:02.407851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.476 [2024-12-15 06:27:02.407884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.476 qpair failed and we were unable to recover it. 
00:36:42.476 [2024-12-15 06:27:02.408022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.476 [2024-12-15 06:27:02.408055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.476 qpair failed and we were unable to recover it. 00:36:42.476 [2024-12-15 06:27:02.408194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.476 [2024-12-15 06:27:02.408226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.476 qpair failed and we were unable to recover it. 00:36:42.476 [2024-12-15 06:27:02.408490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.476 [2024-12-15 06:27:02.408523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.476 qpair failed and we were unable to recover it. 00:36:42.476 [2024-12-15 06:27:02.408658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.476 [2024-12-15 06:27:02.408689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.476 qpair failed and we were unable to recover it. 00:36:42.476 [2024-12-15 06:27:02.408930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.476 [2024-12-15 06:27:02.408963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.476 qpair failed and we were unable to recover it. 
00:36:42.476 [2024-12-15 06:27:02.409155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.476 [2024-12-15 06:27:02.409187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.476 qpair failed and we were unable to recover it. 00:36:42.476 [2024-12-15 06:27:02.409377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.476 [2024-12-15 06:27:02.409410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.476 qpair failed and we were unable to recover it. 00:36:42.476 [2024-12-15 06:27:02.409543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.476 [2024-12-15 06:27:02.409576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.476 qpair failed and we were unable to recover it. 00:36:42.477 [2024-12-15 06:27:02.409702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.409734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 00:36:42.477 [2024-12-15 06:27:02.409847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.409878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 
00:36:42.477 [2024-12-15 06:27:02.410025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.410059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 00:36:42.477 [2024-12-15 06:27:02.410184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.410216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 00:36:42.477 [2024-12-15 06:27:02.410336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.410374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 00:36:42.477 [2024-12-15 06:27:02.410549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.410582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 00:36:42.477 [2024-12-15 06:27:02.410789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.410821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 
00:36:42.477 [2024-12-15 06:27:02.411008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.411040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 00:36:42.477 [2024-12-15 06:27:02.411242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.411274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 00:36:42.477 [2024-12-15 06:27:02.411401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.411432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 00:36:42.477 [2024-12-15 06:27:02.411671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.411704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 00:36:42.477 [2024-12-15 06:27:02.411878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.411910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 
00:36:42.477 [2024-12-15 06:27:02.412039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.412072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 00:36:42.477 [2024-12-15 06:27:02.412206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.412238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 00:36:42.477 [2024-12-15 06:27:02.412414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.412447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 00:36:42.477 [2024-12-15 06:27:02.412650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.412683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 00:36:42.477 [2024-12-15 06:27:02.412878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.412911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 
00:36:42.477 [2024-12-15 06:27:02.413111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.413145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 00:36:42.477 [2024-12-15 06:27:02.413270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.413301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 00:36:42.477 [2024-12-15 06:27:02.413425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.413457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 00:36:42.477 [2024-12-15 06:27:02.413643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.413675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 00:36:42.477 [2024-12-15 06:27:02.413888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.413921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 
00:36:42.477 [2024-12-15 06:27:02.414162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.414200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 00:36:42.477 [2024-12-15 06:27:02.414326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.414357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 00:36:42.477 [2024-12-15 06:27:02.414554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.414587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 00:36:42.477 [2024-12-15 06:27:02.414828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.414861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 00:36:42.477 [2024-12-15 06:27:02.414971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.415013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 
00:36:42.477 [2024-12-15 06:27:02.415195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.415228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 00:36:42.477 [2024-12-15 06:27:02.415340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.415371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 00:36:42.477 [2024-12-15 06:27:02.415570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.415603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 00:36:42.477 [2024-12-15 06:27:02.415781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.415814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 00:36:42.477 [2024-12-15 06:27:02.416013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.416047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 
00:36:42.477 [2024-12-15 06:27:02.416221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.416253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 00:36:42.477 [2024-12-15 06:27:02.416461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.416494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 00:36:42.477 [2024-12-15 06:27:02.416684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.416715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 00:36:42.477 [2024-12-15 06:27:02.416837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.477 [2024-12-15 06:27:02.416869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.477 qpair failed and we were unable to recover it. 00:36:42.477 [2024-12-15 06:27:02.417058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.478 [2024-12-15 06:27:02.417099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.478 qpair failed and we were unable to recover it. 
00:36:42.478 [2024-12-15 06:27:02.417287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.478 [2024-12-15 06:27:02.417319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.478 qpair failed and we were unable to recover it. 00:36:42.478 [2024-12-15 06:27:02.417421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.478 [2024-12-15 06:27:02.417452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.478 qpair failed and we were unable to recover it. 00:36:42.478 [2024-12-15 06:27:02.417648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.478 [2024-12-15 06:27:02.417680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.478 qpair failed and we were unable to recover it. 00:36:42.478 [2024-12-15 06:27:02.417807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.478 [2024-12-15 06:27:02.417840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.478 qpair failed and we were unable to recover it. 00:36:42.478 [2024-12-15 06:27:02.418102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.478 [2024-12-15 06:27:02.418136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.478 qpair failed and we were unable to recover it. 
00:36:42.478 [2024-12-15 06:27:02.418258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.478 [2024-12-15 06:27:02.418289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.478 qpair failed and we were unable to recover it. 00:36:42.478 [2024-12-15 06:27:02.418528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.478 [2024-12-15 06:27:02.418561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.478 qpair failed and we were unable to recover it. 00:36:42.478 [2024-12-15 06:27:02.418802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.478 [2024-12-15 06:27:02.418839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.478 qpair failed and we were unable to recover it. 00:36:42.478 [2024-12-15 06:27:02.418957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.478 [2024-12-15 06:27:02.418989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.478 qpair failed and we were unable to recover it. 00:36:42.478 [2024-12-15 06:27:02.419265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.478 [2024-12-15 06:27:02.419299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.478 qpair failed and we were unable to recover it. 
00:36:42.478 [2024-12-15 06:27:02.419488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.478 [2024-12-15 06:27:02.419519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.478 qpair failed and we were unable to recover it. 00:36:42.478 [2024-12-15 06:27:02.419702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.478 [2024-12-15 06:27:02.419733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.478 qpair failed and we were unable to recover it. 00:36:42.478 [2024-12-15 06:27:02.419976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.478 [2024-12-15 06:27:02.420018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.478 qpair failed and we were unable to recover it. 00:36:42.478 [2024-12-15 06:27:02.420141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.478 [2024-12-15 06:27:02.420173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.478 qpair failed and we were unable to recover it. 00:36:42.478 [2024-12-15 06:27:02.420284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.478 [2024-12-15 06:27:02.420316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.478 qpair failed and we were unable to recover it. 
00:36:42.479 [2024-12-15 06:27:02.432698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.479 [2024-12-15 06:27:02.432730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.479 qpair failed and we were unable to recover it.
00:36:42.479 [2024-12-15 06:27:02.432847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.479 [2024-12-15 06:27:02.432878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.479 qpair failed and we were unable to recover it.
00:36:42.479 [2024-12-15 06:27:02.433069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.480 [2024-12-15 06:27:02.433103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.480 qpair failed and we were unable to recover it.
00:36:42.480 [2024-12-15 06:27:02.433272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.480 [2024-12-15 06:27:02.433305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.480 qpair failed and we were unable to recover it.
00:36:42.480 [2024-12-15 06:27:02.433592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.480 [2024-12-15 06:27:02.433663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.480 qpair failed and we were unable to recover it.
00:36:42.481 [2024-12-15 06:27:02.444824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.481 [2024-12-15 06:27:02.444854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.481 qpair failed and we were unable to recover it. 00:36:42.481 [2024-12-15 06:27:02.445050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.481 [2024-12-15 06:27:02.445084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.481 qpair failed and we were unable to recover it. 00:36:42.481 [2024-12-15 06:27:02.445355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.481 [2024-12-15 06:27:02.445386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.481 qpair failed and we were unable to recover it. 00:36:42.481 [2024-12-15 06:27:02.445669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.481 [2024-12-15 06:27:02.445706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.481 qpair failed and we were unable to recover it. 00:36:42.481 [2024-12-15 06:27:02.445849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.481 [2024-12-15 06:27:02.445880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.481 qpair failed and we were unable to recover it. 
00:36:42.481 [2024-12-15 06:27:02.446053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.481 [2024-12-15 06:27:02.446086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.481 qpair failed and we were unable to recover it. 00:36:42.481 [2024-12-15 06:27:02.446219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.481 [2024-12-15 06:27:02.446249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.481 qpair failed and we were unable to recover it. 00:36:42.481 [2024-12-15 06:27:02.446430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.481 [2024-12-15 06:27:02.446459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.481 qpair failed and we were unable to recover it. 00:36:42.481 [2024-12-15 06:27:02.446582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.481 [2024-12-15 06:27:02.446611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.481 qpair failed and we were unable to recover it. 00:36:42.481 [2024-12-15 06:27:02.446734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.481 [2024-12-15 06:27:02.446763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.481 qpair failed and we were unable to recover it. 
00:36:42.481 [2024-12-15 06:27:02.446958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.481 [2024-12-15 06:27:02.446988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.481 qpair failed and we were unable to recover it. 00:36:42.481 [2024-12-15 06:27:02.447264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.481 [2024-12-15 06:27:02.447294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.481 qpair failed and we were unable to recover it. 00:36:42.481 [2024-12-15 06:27:02.447492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.481 [2024-12-15 06:27:02.447530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.481 qpair failed and we were unable to recover it. 00:36:42.481 [2024-12-15 06:27:02.447790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.481 [2024-12-15 06:27:02.447819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.481 qpair failed and we were unable to recover it. 00:36:42.481 [2024-12-15 06:27:02.448014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.481 [2024-12-15 06:27:02.448045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.481 qpair failed and we were unable to recover it. 
00:36:42.481 [2024-12-15 06:27:02.448247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.481 [2024-12-15 06:27:02.448277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.481 qpair failed and we were unable to recover it. 00:36:42.481 [2024-12-15 06:27:02.448469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.481 [2024-12-15 06:27:02.448497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.481 qpair failed and we were unable to recover it. 00:36:42.481 [2024-12-15 06:27:02.448683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.481 [2024-12-15 06:27:02.448713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.481 qpair failed and we were unable to recover it. 00:36:42.481 [2024-12-15 06:27:02.448903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.481 [2024-12-15 06:27:02.448933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.481 qpair failed and we were unable to recover it. 00:36:42.481 [2024-12-15 06:27:02.449156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.481 [2024-12-15 06:27:02.449188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.481 qpair failed and we were unable to recover it. 
00:36:42.481 [2024-12-15 06:27:02.449329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.481 [2024-12-15 06:27:02.449358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.481 qpair failed and we were unable to recover it. 00:36:42.481 [2024-12-15 06:27:02.449550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.481 [2024-12-15 06:27:02.449581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 00:36:42.482 [2024-12-15 06:27:02.449697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.449727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 00:36:42.482 [2024-12-15 06:27:02.449930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.449961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 00:36:42.482 [2024-12-15 06:27:02.450154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.450185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 
00:36:42.482 [2024-12-15 06:27:02.450397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.450428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 00:36:42.482 [2024-12-15 06:27:02.450603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.450633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 00:36:42.482 [2024-12-15 06:27:02.450748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.450779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 00:36:42.482 [2024-12-15 06:27:02.450904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.450935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 00:36:42.482 [2024-12-15 06:27:02.451144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.451177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 
00:36:42.482 [2024-12-15 06:27:02.451416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.451447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 00:36:42.482 [2024-12-15 06:27:02.451569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.451601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 00:36:42.482 [2024-12-15 06:27:02.451784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.451816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 00:36:42.482 [2024-12-15 06:27:02.451964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.452004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 00:36:42.482 [2024-12-15 06:27:02.452112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.452142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 
00:36:42.482 [2024-12-15 06:27:02.452307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.452338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 00:36:42.482 [2024-12-15 06:27:02.452448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.452478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 00:36:42.482 [2024-12-15 06:27:02.452687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.452719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 00:36:42.482 [2024-12-15 06:27:02.452984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.453027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 00:36:42.482 [2024-12-15 06:27:02.453307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.453338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 
00:36:42.482 [2024-12-15 06:27:02.453534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.453565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 00:36:42.482 [2024-12-15 06:27:02.453746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.453777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 00:36:42.482 [2024-12-15 06:27:02.453896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.453928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 00:36:42.482 [2024-12-15 06:27:02.454120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.454152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 00:36:42.482 [2024-12-15 06:27:02.454354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.454389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 
00:36:42.482 [2024-12-15 06:27:02.454501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.454535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 00:36:42.482 [2024-12-15 06:27:02.454774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.454810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 00:36:42.482 [2024-12-15 06:27:02.454926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.454959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 00:36:42.482 [2024-12-15 06:27:02.455211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.455246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 00:36:42.482 [2024-12-15 06:27:02.455486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.455518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 
00:36:42.482 [2024-12-15 06:27:02.455621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.455654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 00:36:42.482 [2024-12-15 06:27:02.455850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.455883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 00:36:42.482 [2024-12-15 06:27:02.456149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.456183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 00:36:42.482 [2024-12-15 06:27:02.456367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.482 [2024-12-15 06:27:02.456401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.482 qpair failed and we were unable to recover it. 00:36:42.483 [2024-12-15 06:27:02.456659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.483 [2024-12-15 06:27:02.456692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.483 qpair failed and we were unable to recover it. 
00:36:42.483 [2024-12-15 06:27:02.456811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.483 [2024-12-15 06:27:02.456844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.483 qpair failed and we were unable to recover it. 00:36:42.483 [2024-12-15 06:27:02.457029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.483 [2024-12-15 06:27:02.457063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.483 qpair failed and we were unable to recover it. 00:36:42.483 [2024-12-15 06:27:02.457192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.483 [2024-12-15 06:27:02.457225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.483 qpair failed and we were unable to recover it. 00:36:42.483 [2024-12-15 06:27:02.457496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.483 [2024-12-15 06:27:02.457530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.483 qpair failed and we were unable to recover it. 00:36:42.483 [2024-12-15 06:27:02.457729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.483 [2024-12-15 06:27:02.457763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.483 qpair failed and we were unable to recover it. 
00:36:42.483 [2024-12-15 06:27:02.457890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.483 [2024-12-15 06:27:02.457923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.483 qpair failed and we were unable to recover it. 00:36:42.483 [2024-12-15 06:27:02.458052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.483 [2024-12-15 06:27:02.458087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.483 qpair failed and we were unable to recover it. 00:36:42.483 [2024-12-15 06:27:02.458213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.483 [2024-12-15 06:27:02.458248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.483 qpair failed and we were unable to recover it. 00:36:42.483 [2024-12-15 06:27:02.458372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.483 [2024-12-15 06:27:02.458405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.483 qpair failed and we were unable to recover it. 00:36:42.483 [2024-12-15 06:27:02.458649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.483 [2024-12-15 06:27:02.458681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.483 qpair failed and we were unable to recover it. 
00:36:42.483 [2024-12-15 06:27:02.458807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.483 [2024-12-15 06:27:02.458842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.483 qpair failed and we were unable to recover it. 00:36:42.483 [2024-12-15 06:27:02.458971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.483 [2024-12-15 06:27:02.459023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.483 qpair failed and we were unable to recover it. 00:36:42.483 [2024-12-15 06:27:02.459271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.483 [2024-12-15 06:27:02.459304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.483 qpair failed and we were unable to recover it. 00:36:42.483 [2024-12-15 06:27:02.459505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.483 [2024-12-15 06:27:02.459538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.483 qpair failed and we were unable to recover it. 00:36:42.483 [2024-12-15 06:27:02.459726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.483 [2024-12-15 06:27:02.459760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.483 qpair failed and we were unable to recover it. 
00:36:42.483 [2024-12-15 06:27:02.459934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.483 [2024-12-15 06:27:02.459967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.483 qpair failed and we were unable to recover it. 00:36:42.483 [2024-12-15 06:27:02.460080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.483 [2024-12-15 06:27:02.460120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.483 qpair failed and we were unable to recover it. 00:36:42.483 [2024-12-15 06:27:02.460298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.483 [2024-12-15 06:27:02.460332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.483 qpair failed and we were unable to recover it. 00:36:42.483 [2024-12-15 06:27:02.460512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.483 [2024-12-15 06:27:02.460545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.483 qpair failed and we were unable to recover it. 00:36:42.483 [2024-12-15 06:27:02.460669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.483 [2024-12-15 06:27:02.460704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.483 qpair failed and we were unable to recover it. 
00:36:42.483 [2024-12-15 06:27:02.460944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.483 [2024-12-15 06:27:02.460977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.483 qpair failed and we were unable to recover it.
[the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" sequence for tqpair=0x1c89cd0 repeats continuously from 06:27:02.461164 through 06:27:02.474043; repeats elided]
00:36:42.485 [2024-12-15 06:27:02.474270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.485 [2024-12-15 06:27:02.474343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.485 qpair failed and we were unable to recover it.
[the same error sequence for tqpair=0x7ff288000b90 repeats continuously from 06:27:02.474492 through 06:27:02.482883; repeats elided]
00:36:42.486 [2024-12-15 06:27:02.482931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c97c70 (9): Bad file descriptor
00:36:42.486 [2024-12-15 06:27:02.483248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.486 [2024-12-15 06:27:02.483320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.486 qpair failed and we were unable to recover it.
[the same error sequence for tqpair=0x7ff290000b90 repeats continuously from 06:27:02.483523 through 06:27:02.485465; repeats elided]
00:36:42.486 [2024-12-15 06:27:02.485638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.486 [2024-12-15 06:27:02.485671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.486 qpair failed and we were unable to recover it. 00:36:42.486 [2024-12-15 06:27:02.485852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.486 [2024-12-15 06:27:02.485886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.486 qpair failed and we were unable to recover it. 00:36:42.486 [2024-12-15 06:27:02.486032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.486 [2024-12-15 06:27:02.486068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.486 qpair failed and we were unable to recover it. 00:36:42.486 [2024-12-15 06:27:02.486175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.486 [2024-12-15 06:27:02.486207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.486 qpair failed and we were unable to recover it. 00:36:42.486 [2024-12-15 06:27:02.486383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.486 [2024-12-15 06:27:02.486416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.486 qpair failed and we were unable to recover it. 
00:36:42.486 [2024-12-15 06:27:02.486540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.486 [2024-12-15 06:27:02.486573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.486 qpair failed and we were unable to recover it. 00:36:42.486 [2024-12-15 06:27:02.486688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.486 [2024-12-15 06:27:02.486721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.486 qpair failed and we were unable to recover it. 00:36:42.486 [2024-12-15 06:27:02.486990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.486 [2024-12-15 06:27:02.487039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.486 qpair failed and we were unable to recover it. 00:36:42.486 [2024-12-15 06:27:02.487185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.486 [2024-12-15 06:27:02.487218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.486 qpair failed and we were unable to recover it. 00:36:42.486 [2024-12-15 06:27:02.487347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.486 [2024-12-15 06:27:02.487381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.486 qpair failed and we were unable to recover it. 
00:36:42.486 [2024-12-15 06:27:02.487565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.486 [2024-12-15 06:27:02.487599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.486 qpair failed and we were unable to recover it. 00:36:42.486 [2024-12-15 06:27:02.487801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.486 [2024-12-15 06:27:02.487834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.486 qpair failed and we were unable to recover it. 00:36:42.486 [2024-12-15 06:27:02.488042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.486 [2024-12-15 06:27:02.488078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.486 qpair failed and we were unable to recover it. 00:36:42.486 [2024-12-15 06:27:02.488267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.488300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 00:36:42.487 [2024-12-15 06:27:02.488539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.488572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 
00:36:42.487 [2024-12-15 06:27:02.488753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.488793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 00:36:42.487 [2024-12-15 06:27:02.488990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.489040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 00:36:42.487 [2024-12-15 06:27:02.489156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.489190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 00:36:42.487 [2024-12-15 06:27:02.489380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.489412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 00:36:42.487 [2024-12-15 06:27:02.489607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.489640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 
00:36:42.487 [2024-12-15 06:27:02.489834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.489867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 00:36:42.487 [2024-12-15 06:27:02.490054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.490088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 00:36:42.487 [2024-12-15 06:27:02.490296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.490330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 00:36:42.487 [2024-12-15 06:27:02.490613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.490646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 00:36:42.487 [2024-12-15 06:27:02.490821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.490855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 
00:36:42.487 [2024-12-15 06:27:02.491054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.491090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 00:36:42.487 [2024-12-15 06:27:02.491267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.491300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 00:36:42.487 [2024-12-15 06:27:02.491429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.491463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 00:36:42.487 [2024-12-15 06:27:02.491656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.491689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 00:36:42.487 [2024-12-15 06:27:02.491906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.491941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 
00:36:42.487 [2024-12-15 06:27:02.492147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.492182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 00:36:42.487 [2024-12-15 06:27:02.492302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.492335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 00:36:42.487 [2024-12-15 06:27:02.492549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.492582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 00:36:42.487 [2024-12-15 06:27:02.492792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.492825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 00:36:42.487 [2024-12-15 06:27:02.493066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.493101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 
00:36:42.487 [2024-12-15 06:27:02.493292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.493325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 00:36:42.487 [2024-12-15 06:27:02.493453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.493487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 00:36:42.487 [2024-12-15 06:27:02.493614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.493646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 00:36:42.487 [2024-12-15 06:27:02.493768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.493801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 00:36:42.487 [2024-12-15 06:27:02.493987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.494029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 
00:36:42.487 [2024-12-15 06:27:02.494203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.494237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 00:36:42.487 [2024-12-15 06:27:02.494359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.494392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 00:36:42.487 [2024-12-15 06:27:02.494527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.494561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 00:36:42.487 [2024-12-15 06:27:02.494693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.494727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 00:36:42.487 [2024-12-15 06:27:02.495005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.495039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 
00:36:42.487 [2024-12-15 06:27:02.495326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.495360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.487 qpair failed and we were unable to recover it. 00:36:42.487 [2024-12-15 06:27:02.495547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.487 [2024-12-15 06:27:02.495580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 00:36:42.488 [2024-12-15 06:27:02.495690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.495724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 00:36:42.488 [2024-12-15 06:27:02.495857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.495890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 00:36:42.488 [2024-12-15 06:27:02.496133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.496169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 
00:36:42.488 [2024-12-15 06:27:02.496301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.496334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 00:36:42.488 [2024-12-15 06:27:02.496568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.496602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 00:36:42.488 [2024-12-15 06:27:02.496794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.496828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 00:36:42.488 [2024-12-15 06:27:02.497043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.497079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 00:36:42.488 [2024-12-15 06:27:02.497204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.497237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 
00:36:42.488 [2024-12-15 06:27:02.497434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.497473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 00:36:42.488 [2024-12-15 06:27:02.497727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.497759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 00:36:42.488 [2024-12-15 06:27:02.497940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.497972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 00:36:42.488 [2024-12-15 06:27:02.498254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.498288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 00:36:42.488 [2024-12-15 06:27:02.498416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.498450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 
00:36:42.488 [2024-12-15 06:27:02.498592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.498626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 00:36:42.488 [2024-12-15 06:27:02.498753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.498785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 00:36:42.488 [2024-12-15 06:27:02.498925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.498960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 00:36:42.488 [2024-12-15 06:27:02.499287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.499360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 00:36:42.488 [2024-12-15 06:27:02.499504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.499545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 
00:36:42.488 [2024-12-15 06:27:02.499725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.499759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 00:36:42.488 [2024-12-15 06:27:02.499930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.499965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 00:36:42.488 [2024-12-15 06:27:02.500164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.500198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 00:36:42.488 [2024-12-15 06:27:02.500333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.500365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 00:36:42.488 [2024-12-15 06:27:02.500489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.500524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 
00:36:42.488 [2024-12-15 06:27:02.500701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.500733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 00:36:42.488 [2024-12-15 06:27:02.500913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.500945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 00:36:42.488 [2024-12-15 06:27:02.501202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.501237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 00:36:42.488 [2024-12-15 06:27:02.501444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.501477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 00:36:42.488 [2024-12-15 06:27:02.501609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.501642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 
00:36:42.488 [2024-12-15 06:27:02.501776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.501810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 00:36:42.488 [2024-12-15 06:27:02.502003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.502037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 00:36:42.488 [2024-12-15 06:27:02.502212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.502246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 00:36:42.488 [2024-12-15 06:27:02.502428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.502461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 00:36:42.488 [2024-12-15 06:27:02.502585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.488 [2024-12-15 06:27:02.502618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.488 qpair failed and we were unable to recover it. 
00:36:42.488 [2024-12-15 06:27:02.502809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.502843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.503021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.503055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.503257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.503289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.503474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.503507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.503698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.503732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 
00:36:42.489 [2024-12-15 06:27:02.503934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.503969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.504176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.504210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.504316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.504348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.504540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.504573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.504751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.504784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 
00:36:42.489 [2024-12-15 06:27:02.504962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.505006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.505257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.505290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.505412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.505445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.505551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.505583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.505701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.505734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 
00:36:42.489 [2024-12-15 06:27:02.505916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.505954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.506096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.506131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.506307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.506340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.506528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.506563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.506677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.506711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 
00:36:42.489 [2024-12-15 06:27:02.506974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.507019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.507145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.507177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.507313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.507345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.507544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.507578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.507758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.507791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 
00:36:42.489 [2024-12-15 06:27:02.507917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.507949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.508083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.508116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.508355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.508389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.508566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.508600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.508849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.508883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 
00:36:42.489 [2024-12-15 06:27:02.509016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.509050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.509230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.509263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.509484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.509517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.509696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.509728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.509849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.509881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 
00:36:42.489 [2024-12-15 06:27:02.510067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.510102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.510291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.510325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.510460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.489 [2024-12-15 06:27:02.510493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.489 qpair failed and we were unable to recover it. 00:36:42.489 [2024-12-15 06:27:02.510669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.510702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 00:36:42.490 [2024-12-15 06:27:02.510889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.510923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 
00:36:42.490 [2024-12-15 06:27:02.511165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.511201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 00:36:42.490 [2024-12-15 06:27:02.511332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.511365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 00:36:42.490 [2024-12-15 06:27:02.511557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.511591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 00:36:42.490 [2024-12-15 06:27:02.511790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.511822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 00:36:42.490 [2024-12-15 06:27:02.512044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.512079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 
00:36:42.490 [2024-12-15 06:27:02.512185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.512219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 00:36:42.490 [2024-12-15 06:27:02.512338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.512370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 00:36:42.490 [2024-12-15 06:27:02.512566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.512598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 00:36:42.490 [2024-12-15 06:27:02.512840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.512872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 00:36:42.490 [2024-12-15 06:27:02.513061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.513095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 
00:36:42.490 [2024-12-15 06:27:02.513281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.513314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 00:36:42.490 [2024-12-15 06:27:02.513418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.513451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 00:36:42.490 [2024-12-15 06:27:02.513564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.513596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 00:36:42.490 [2024-12-15 06:27:02.513768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.513801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 00:36:42.490 [2024-12-15 06:27:02.514039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.514073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 
00:36:42.490 [2024-12-15 06:27:02.514318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.514358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 00:36:42.490 [2024-12-15 06:27:02.514651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.514685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 00:36:42.490 [2024-12-15 06:27:02.514886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.514919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 00:36:42.490 [2024-12-15 06:27:02.515024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.515057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 00:36:42.490 [2024-12-15 06:27:02.515272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.515305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 
00:36:42.490 [2024-12-15 06:27:02.515502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.515536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 00:36:42.490 [2024-12-15 06:27:02.515779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.515813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 00:36:42.490 [2024-12-15 06:27:02.516002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.516035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 00:36:42.490 [2024-12-15 06:27:02.516162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.516194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 00:36:42.490 [2024-12-15 06:27:02.516432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.516465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 
00:36:42.490 [2024-12-15 06:27:02.516641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.516675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 00:36:42.490 [2024-12-15 06:27:02.516854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.516887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 00:36:42.490 [2024-12-15 06:27:02.517077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.517111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 00:36:42.490 [2024-12-15 06:27:02.517293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.517326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 00:36:42.490 [2024-12-15 06:27:02.517506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.517538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 
00:36:42.490 [2024-12-15 06:27:02.517717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.517752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 00:36:42.490 [2024-12-15 06:27:02.517930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.517964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 00:36:42.490 [2024-12-15 06:27:02.518117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.518151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 00:36:42.490 [2024-12-15 06:27:02.518331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.490 [2024-12-15 06:27:02.518363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.490 qpair failed and we were unable to recover it. 00:36:42.491 [2024-12-15 06:27:02.518553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.491 [2024-12-15 06:27:02.518586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.491 qpair failed and we were unable to recover it. 
00:36:42.491 [2024-12-15 06:27:02.518722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.491 [2024-12-15 06:27:02.518755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.491 qpair failed and we were unable to recover it. 00:36:42.491 [2024-12-15 06:27:02.518948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.491 [2024-12-15 06:27:02.518982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.491 qpair failed and we were unable to recover it. 00:36:42.491 [2024-12-15 06:27:02.519183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.491 [2024-12-15 06:27:02.519217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.491 qpair failed and we were unable to recover it. 00:36:42.491 [2024-12-15 06:27:02.519391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.491 [2024-12-15 06:27:02.519424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.491 qpair failed and we were unable to recover it. 00:36:42.491 [2024-12-15 06:27:02.519532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.491 [2024-12-15 06:27:02.519564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.491 qpair failed and we were unable to recover it. 
00:36:42.491 [2024-12-15 06:27:02.519678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.491 [2024-12-15 06:27:02.519711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.491 qpair failed and we were unable to recover it. 00:36:42.491 [2024-12-15 06:27:02.519892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.491 [2024-12-15 06:27:02.519925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.491 qpair failed and we were unable to recover it. 00:36:42.491 [2024-12-15 06:27:02.520101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.491 [2024-12-15 06:27:02.520175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.491 qpair failed and we were unable to recover it. 00:36:42.491 [2024-12-15 06:27:02.520461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.491 [2024-12-15 06:27:02.520499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.491 qpair failed and we were unable to recover it. 00:36:42.491 [2024-12-15 06:27:02.520632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.491 [2024-12-15 06:27:02.520667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.491 qpair failed and we were unable to recover it. 
00:36:42.491 [2024-12-15 06:27:02.520783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.491 [2024-12-15 06:27:02.520817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.491 qpair failed and we were unable to recover it. 00:36:42.491 [2024-12-15 06:27:02.521056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.491 [2024-12-15 06:27:02.521091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.491 qpair failed and we were unable to recover it. 00:36:42.491 [2024-12-15 06:27:02.521210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.491 [2024-12-15 06:27:02.521244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.491 qpair failed and we were unable to recover it. 00:36:42.491 [2024-12-15 06:27:02.521482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.491 [2024-12-15 06:27:02.521517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.491 qpair failed and we were unable to recover it. 00:36:42.491 [2024-12-15 06:27:02.521700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.491 [2024-12-15 06:27:02.521733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.491 qpair failed and we were unable to recover it. 
00:36:42.491 [2024-12-15 06:27:02.521907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.491 [2024-12-15 06:27:02.521941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:42.491 qpair failed and we were unable to recover it.
[... the connect()/sock-connection-error/qpair-failure sequence above repeats ~115 times between 06:27:02.521 and 06:27:02.548, with tqpair values 0x7ff288000b90, 0x7ff284000b90, and 0x7ff290000b90 ...]
00:36:42.494 [2024-12-15 06:27:02.547727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.494 [2024-12-15 06:27:02.547761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.494 qpair failed and we were unable to recover it. 00:36:42.494 [2024-12-15 06:27:02.547874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.494 [2024-12-15 06:27:02.547906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.494 qpair failed and we were unable to recover it. 00:36:42.494 [2024-12-15 06:27:02.548176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.494 [2024-12-15 06:27:02.548211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.494 qpair failed and we were unable to recover it. 00:36:42.494 [2024-12-15 06:27:02.548349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.494 [2024-12-15 06:27:02.548381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.494 qpair failed and we were unable to recover it. 00:36:42.494 [2024-12-15 06:27:02.548492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.494 [2024-12-15 06:27:02.548526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.494 qpair failed and we were unable to recover it. 
00:36:42.494 [2024-12-15 06:27:02.548709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.494 [2024-12-15 06:27:02.548742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.494 qpair failed and we were unable to recover it. 00:36:42.494 [2024-12-15 06:27:02.548858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.494 [2024-12-15 06:27:02.548890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.494 qpair failed and we were unable to recover it. 00:36:42.494 [2024-12-15 06:27:02.549031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.494 [2024-12-15 06:27:02.549067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.494 qpair failed and we were unable to recover it. 00:36:42.494 [2024-12-15 06:27:02.549172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.494 [2024-12-15 06:27:02.549205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.494 qpair failed and we were unable to recover it. 00:36:42.494 [2024-12-15 06:27:02.549420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.494 [2024-12-15 06:27:02.549454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.494 qpair failed and we were unable to recover it. 
00:36:42.494 [2024-12-15 06:27:02.549584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.494 [2024-12-15 06:27:02.549626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.494 qpair failed and we were unable to recover it. 00:36:42.494 [2024-12-15 06:27:02.549843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.494 [2024-12-15 06:27:02.549877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.494 qpair failed and we were unable to recover it. 00:36:42.494 [2024-12-15 06:27:02.550001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.494 [2024-12-15 06:27:02.550037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.494 qpair failed and we were unable to recover it. 00:36:42.494 [2024-12-15 06:27:02.550300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.494 [2024-12-15 06:27:02.550333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.494 qpair failed and we were unable to recover it. 00:36:42.494 [2024-12-15 06:27:02.550444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.494 [2024-12-15 06:27:02.550477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.494 qpair failed and we were unable to recover it. 
00:36:42.494 [2024-12-15 06:27:02.550672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.494 [2024-12-15 06:27:02.550705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.494 qpair failed and we were unable to recover it. 00:36:42.494 [2024-12-15 06:27:02.550824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.550860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.551044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.551080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.551222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.551255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.551388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.551419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 
00:36:42.495 [2024-12-15 06:27:02.551663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.551696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.551881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.551915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.552089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.552123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.552299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.552330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.552520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.552553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 
00:36:42.495 [2024-12-15 06:27:02.552737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.552771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.552941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.552976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.553187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.553222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.553436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.553470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.553664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.553698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 
00:36:42.495 [2024-12-15 06:27:02.553881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.553915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.554109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.554143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.554269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.554302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.554493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.554527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.554722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.554756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 
00:36:42.495 [2024-12-15 06:27:02.554947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.554980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.555221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.555256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.555602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.555676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.555881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.555919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.556121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.556157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 
00:36:42.495 [2024-12-15 06:27:02.556278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.556312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.556436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.556469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.556584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.556617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.556863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.556896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.557041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.557076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 
00:36:42.495 [2024-12-15 06:27:02.557268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.557302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.557420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.557453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.557705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.557740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.557917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.557950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.558089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.558123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 
00:36:42.495 [2024-12-15 06:27:02.558233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.558266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.558531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.558566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.558747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.558781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.558964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.559018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 00:36:42.495 [2024-12-15 06:27:02.559157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.495 [2024-12-15 06:27:02.559192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.495 qpair failed and we were unable to recover it. 
00:36:42.496 [2024-12-15 06:27:02.559315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.559348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 00:36:42.496 [2024-12-15 06:27:02.559539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.559573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 00:36:42.496 [2024-12-15 06:27:02.559699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.559733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 00:36:42.496 [2024-12-15 06:27:02.559917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.559950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 00:36:42.496 [2024-12-15 06:27:02.560095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.560129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 
00:36:42.496 [2024-12-15 06:27:02.560398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.560432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 00:36:42.496 [2024-12-15 06:27:02.560612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.560645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 00:36:42.496 [2024-12-15 06:27:02.560837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.560872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 00:36:42.496 [2024-12-15 06:27:02.560981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.561027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 00:36:42.496 [2024-12-15 06:27:02.561145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.561185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 
00:36:42.496 [2024-12-15 06:27:02.561392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.561425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 00:36:42.496 [2024-12-15 06:27:02.561692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.561726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 00:36:42.496 [2024-12-15 06:27:02.561913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.561946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 00:36:42.496 [2024-12-15 06:27:02.562091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.562127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 00:36:42.496 [2024-12-15 06:27:02.562254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.562287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 
00:36:42.496 [2024-12-15 06:27:02.562501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.562534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 00:36:42.496 [2024-12-15 06:27:02.562661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.562695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 00:36:42.496 [2024-12-15 06:27:02.562802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.562835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 00:36:42.496 [2024-12-15 06:27:02.563025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.563061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 00:36:42.496 [2024-12-15 06:27:02.563236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.563271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 
00:36:42.496 [2024-12-15 06:27:02.563452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.563485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 00:36:42.496 [2024-12-15 06:27:02.563740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.563774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 00:36:42.496 [2024-12-15 06:27:02.563891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.563924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 00:36:42.496 [2024-12-15 06:27:02.564154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.564189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 00:36:42.496 [2024-12-15 06:27:02.564369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.564402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 
00:36:42.496 [2024-12-15 06:27:02.564517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.564551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 00:36:42.496 [2024-12-15 06:27:02.564655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.564688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 00:36:42.496 [2024-12-15 06:27:02.564884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.564917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 00:36:42.496 [2024-12-15 06:27:02.565155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.565190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 00:36:42.496 [2024-12-15 06:27:02.565307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.565340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 
00:36:42.496 [2024-12-15 06:27:02.565541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.565574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 00:36:42.496 [2024-12-15 06:27:02.565750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.565784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 00:36:42.496 [2024-12-15 06:27:02.565922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.565955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 00:36:42.496 [2024-12-15 06:27:02.566159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.496 [2024-12-15 06:27:02.566193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.496 qpair failed and we were unable to recover it. 00:36:42.496 [2024-12-15 06:27:02.566314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.497 [2024-12-15 06:27:02.566346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.497 qpair failed and we were unable to recover it. 
00:36:42.497 [2024-12-15 06:27:02.566540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.497 [2024-12-15 06:27:02.566574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.497 qpair failed and we were unable to recover it. 00:36:42.497 [2024-12-15 06:27:02.566761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.497 [2024-12-15 06:27:02.566800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.497 qpair failed and we were unable to recover it. 00:36:42.497 [2024-12-15 06:27:02.566983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.497 [2024-12-15 06:27:02.567026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.497 qpair failed and we were unable to recover it. 00:36:42.497 [2024-12-15 06:27:02.567164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.497 [2024-12-15 06:27:02.567198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.497 qpair failed and we were unable to recover it. 00:36:42.497 [2024-12-15 06:27:02.567327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.497 [2024-12-15 06:27:02.567360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.497 qpair failed and we were unable to recover it. 
00:36:42.497 [2024-12-15 06:27:02.567536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.497 [2024-12-15 06:27:02.567568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.497 qpair failed and we were unable to recover it. 00:36:42.497 [2024-12-15 06:27:02.567674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.497 [2024-12-15 06:27:02.567707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.497 qpair failed and we were unable to recover it. 00:36:42.497 [2024-12-15 06:27:02.567825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.497 [2024-12-15 06:27:02.567858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.497 qpair failed and we were unable to recover it. 00:36:42.497 [2024-12-15 06:27:02.568037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.497 [2024-12-15 06:27:02.568072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.497 qpair failed and we were unable to recover it. 00:36:42.497 [2024-12-15 06:27:02.568206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.497 [2024-12-15 06:27:02.568239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.497 qpair failed and we were unable to recover it. 
00:36:42.497 [2024-12-15 06:27:02.568481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.497 [2024-12-15 06:27:02.568515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.497 qpair failed and we were unable to recover it. 00:36:42.497 [2024-12-15 06:27:02.568704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.497 [2024-12-15 06:27:02.568737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.497 qpair failed and we were unable to recover it. 00:36:42.497 [2024-12-15 06:27:02.568860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.497 [2024-12-15 06:27:02.568893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.497 qpair failed and we were unable to recover it. 00:36:42.497 [2024-12-15 06:27:02.569122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.497 [2024-12-15 06:27:02.569157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.497 qpair failed and we were unable to recover it. 00:36:42.497 [2024-12-15 06:27:02.569332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.497 [2024-12-15 06:27:02.569365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.497 qpair failed and we were unable to recover it. 
00:36:42.497 [2024-12-15 06:27:02.569499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.497 [2024-12-15 06:27:02.569532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.497 qpair failed and we were unable to recover it. 00:36:42.776 [2024-12-15 06:27:02.569784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.569819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 00:36:42.776 [2024-12-15 06:27:02.570100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.570136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 00:36:42.776 [2024-12-15 06:27:02.570249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.570283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 00:36:42.776 [2024-12-15 06:27:02.570483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.570517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 
00:36:42.776 [2024-12-15 06:27:02.570642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.570675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 00:36:42.776 [2024-12-15 06:27:02.570797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.570830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 00:36:42.776 [2024-12-15 06:27:02.570957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.571000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 00:36:42.776 [2024-12-15 06:27:02.571180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.571213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 00:36:42.776 [2024-12-15 06:27:02.571347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.571380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 
00:36:42.776 [2024-12-15 06:27:02.571489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.571521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 00:36:42.776 [2024-12-15 06:27:02.571701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.571735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 00:36:42.776 [2024-12-15 06:27:02.571870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.571904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 00:36:42.776 [2024-12-15 06:27:02.572081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.572117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 00:36:42.776 [2024-12-15 06:27:02.572300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.572334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 
00:36:42.776 [2024-12-15 06:27:02.572463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.572495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 00:36:42.776 [2024-12-15 06:27:02.572621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.572656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 00:36:42.776 [2024-12-15 06:27:02.572784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.572817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 00:36:42.776 [2024-12-15 06:27:02.573001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.573036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 00:36:42.776 [2024-12-15 06:27:02.573235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.573269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 
00:36:42.776 [2024-12-15 06:27:02.573380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.573413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 00:36:42.776 [2024-12-15 06:27:02.573597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.573630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 00:36:42.776 [2024-12-15 06:27:02.573755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.573789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 00:36:42.776 [2024-12-15 06:27:02.573906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.573939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 00:36:42.776 [2024-12-15 06:27:02.574216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.574250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 
00:36:42.776 [2024-12-15 06:27:02.574377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.574411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 00:36:42.776 [2024-12-15 06:27:02.574605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.574639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 00:36:42.776 [2024-12-15 06:27:02.574767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.574800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 00:36:42.776 [2024-12-15 06:27:02.574989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.575031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 00:36:42.776 [2024-12-15 06:27:02.575268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.575302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 
00:36:42.776 [2024-12-15 06:27:02.575422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.575455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 00:36:42.776 [2024-12-15 06:27:02.575639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.575672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 00:36:42.776 [2024-12-15 06:27:02.575784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.575818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 00:36:42.776 [2024-12-15 06:27:02.575928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.575961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.776 qpair failed and we were unable to recover it. 00:36:42.776 [2024-12-15 06:27:02.576216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.776 [2024-12-15 06:27:02.576290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 
00:36:42.777 [2024-12-15 06:27:02.576578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.576616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 00:36:42.777 [2024-12-15 06:27:02.576861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.576896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 00:36:42.777 [2024-12-15 06:27:02.577087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.577123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 00:36:42.777 [2024-12-15 06:27:02.577365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.577398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 00:36:42.777 [2024-12-15 06:27:02.577591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.577626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 
00:36:42.777 [2024-12-15 06:27:02.577847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.577880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 00:36:42.777 [2024-12-15 06:27:02.578023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.578058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 00:36:42.777 [2024-12-15 06:27:02.578173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.578208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 00:36:42.777 [2024-12-15 06:27:02.578335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.578368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 00:36:42.777 [2024-12-15 06:27:02.578495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.578528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 
00:36:42.777 [2024-12-15 06:27:02.578819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.578853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 00:36:42.777 [2024-12-15 06:27:02.579090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.579125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 00:36:42.777 [2024-12-15 06:27:02.579240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.579274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 00:36:42.777 [2024-12-15 06:27:02.579385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.579418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 00:36:42.777 [2024-12-15 06:27:02.579609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.579643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 
00:36:42.777 [2024-12-15 06:27:02.579779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.579813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 00:36:42.777 [2024-12-15 06:27:02.580056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.580091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 00:36:42.777 [2024-12-15 06:27:02.580312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.580345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 00:36:42.777 [2024-12-15 06:27:02.580596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.580628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 00:36:42.777 [2024-12-15 06:27:02.580806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.580840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 
00:36:42.777 [2024-12-15 06:27:02.581020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.581055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 00:36:42.777 [2024-12-15 06:27:02.581173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.581206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 00:36:42.777 [2024-12-15 06:27:02.581479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.581513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 00:36:42.777 [2024-12-15 06:27:02.581705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.581738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 00:36:42.777 [2024-12-15 06:27:02.581842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.581877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 
00:36:42.777 [2024-12-15 06:27:02.582050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.582086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 00:36:42.777 [2024-12-15 06:27:02.582208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.582243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 00:36:42.777 [2024-12-15 06:27:02.582416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.582450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 00:36:42.777 [2024-12-15 06:27:02.582663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.582696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 00:36:42.777 [2024-12-15 06:27:02.582874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.582907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 
00:36:42.777 [2024-12-15 06:27:02.583026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.583062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 00:36:42.777 [2024-12-15 06:27:02.583165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.583199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 00:36:42.777 [2024-12-15 06:27:02.583340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.583380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 00:36:42.777 [2024-12-15 06:27:02.583509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.583543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 00:36:42.777 [2024-12-15 06:27:02.583657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.777 [2024-12-15 06:27:02.583690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.777 qpair failed and we were unable to recover it. 
00:36:42.780 [... the same posix.c:1054 connect() failed (errno = 111) / nvme_tcp.c:2288 sock connection error / "qpair failed and we were unable to recover it" triplet repeats for tqpair=0x7ff284000b90 (10.0.0.2:4420) through 06:27:02.607869 ...]
00:36:42.780 [2024-12-15 06:27:02.607990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.780 [2024-12-15 06:27:02.608032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.780 qpair failed and we were unable to recover it. 00:36:42.780 [2024-12-15 06:27:02.608232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.780 [2024-12-15 06:27:02.608266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.780 qpair failed and we were unable to recover it. 00:36:42.780 [2024-12-15 06:27:02.608374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.780 [2024-12-15 06:27:02.608405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.780 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.608583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.608617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.608747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.608781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 
00:36:42.781 [2024-12-15 06:27:02.609024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.609059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.609182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.609216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.609476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.609515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.609712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.609746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.609983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.610026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 
00:36:42.781 [2024-12-15 06:27:02.610266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.610300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.610418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.610451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.610629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.610663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.610852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.610886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.611128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.611163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 
00:36:42.781 [2024-12-15 06:27:02.611272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.611306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.611489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.611522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.611645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.611678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.611852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.611885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.612128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.612164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 
00:36:42.781 [2024-12-15 06:27:02.612277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.612310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.612513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.612548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.612723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.612756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.612866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.612899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.613121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.613156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 
00:36:42.781 [2024-12-15 06:27:02.613278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.613312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.613555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.613589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.613789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.613822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.614016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.614051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.614243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.614277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 
00:36:42.781 [2024-12-15 06:27:02.614546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.614579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.614756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.614790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.614976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.615020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.615206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.615240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.615376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.615411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 
00:36:42.781 [2024-12-15 06:27:02.615527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.615561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.615838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.615872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.616135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.616170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.616299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.616333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.616507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.616541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 
00:36:42.781 [2024-12-15 06:27:02.616675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.616708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.781 qpair failed and we were unable to recover it. 00:36:42.781 [2024-12-15 06:27:02.616831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.781 [2024-12-15 06:27:02.616865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.616987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.617030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.617217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.617250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.617508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.617541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 
00:36:42.782 [2024-12-15 06:27:02.617651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.617686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.617878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.617912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.618151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.618192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.618378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.618412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.618585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.618618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 
00:36:42.782 [2024-12-15 06:27:02.618802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.618836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.618954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.618988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.619177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.619211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.619422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.619457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.619628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.619661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 
00:36:42.782 [2024-12-15 06:27:02.619874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.619908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.620029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.620064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.620256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.620289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.620396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.620430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.620640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.620674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 
00:36:42.782 [2024-12-15 06:27:02.620984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.621025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.621141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.621175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.621295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.621329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.621573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.621607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.621728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.621762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 
00:36:42.782 [2024-12-15 06:27:02.621936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.621970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.622171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.622206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.622475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.622508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.622692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.622726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.622922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.622956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 
00:36:42.782 [2024-12-15 06:27:02.623172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.623207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.623395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.623428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.623688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.623721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.623910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.623943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.624207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.624242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 
00:36:42.782 [2024-12-15 06:27:02.624447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.624481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.624600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.624633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.624803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.624836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.624960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.625003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 00:36:42.782 [2024-12-15 06:27:02.625196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.625231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.782 qpair failed and we were unable to recover it. 
00:36:42.782 [2024-12-15 06:27:02.625342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.782 [2024-12-15 06:27:02.625375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.625562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.625595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.625823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.625857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.626047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.626082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.626351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.626384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 
00:36:42.783 [2024-12-15 06:27:02.626650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.626684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.626821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.626854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.627039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.627080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.627255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.627290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.627488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.627521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 
00:36:42.783 [2024-12-15 06:27:02.627765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.627799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.627917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.627951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.628132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.628167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.628364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.628399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.628644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.628677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 
00:36:42.783 [2024-12-15 06:27:02.628910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.628944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.629056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.629091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.629215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.629248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.629353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.629387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.629652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.629685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 
00:36:42.783 [2024-12-15 06:27:02.629886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.629919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.630116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.630152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.630341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.630374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.630489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.630522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.630812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.630846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 
00:36:42.783 [2024-12-15 06:27:02.630976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.631017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.631230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.631263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.631514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.631547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.631808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.631842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.632033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.632068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 
00:36:42.783 [2024-12-15 06:27:02.632256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.632290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.632551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.632585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.632773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.632807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.633012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.633046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.633169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.633203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 
00:36:42.783 [2024-12-15 06:27:02.633386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.633420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.633607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.633640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.633828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.633861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.634054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.783 [2024-12-15 06:27:02.634089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.783 qpair failed and we were unable to recover it. 00:36:42.783 [2024-12-15 06:27:02.634366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.634399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 
00:36:42.784 [2024-12-15 06:27:02.634508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.634542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.634728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.634762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.634937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.634970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.635110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.635144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.635257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.635291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 
00:36:42.784 [2024-12-15 06:27:02.635531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.635565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.635704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.635737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.635849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.635888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.636011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.636046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.636222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.636255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 
00:36:42.784 [2024-12-15 06:27:02.636506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.636540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.636751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.636784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.636960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.637002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.637134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.637168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.637287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.637320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 
00:36:42.784 [2024-12-15 06:27:02.637427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.637462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.637705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.637738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.637851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.637885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.638081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.638117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.638383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.638416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 
00:36:42.784 [2024-12-15 06:27:02.638538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.638572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.638755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.638789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.639052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.639087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.639276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.639310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.639433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.639466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 
00:36:42.784 [2024-12-15 06:27:02.639604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.639638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.639818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.639852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.639984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.640046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.640222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.640256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.640382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.640415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 
00:36:42.784 [2024-12-15 06:27:02.640537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.640571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.640753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.640787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.640965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.641008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.641273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.641307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.641532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.641605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 
00:36:42.784 [2024-12-15 06:27:02.641830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.641868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.642064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.642103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.642216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.784 [2024-12-15 06:27:02.642249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.784 qpair failed and we were unable to recover it. 00:36:42.784 [2024-12-15 06:27:02.642390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.785 [2024-12-15 06:27:02.642424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.785 qpair failed and we were unable to recover it. 00:36:42.785 [2024-12-15 06:27:02.642687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.785 [2024-12-15 06:27:02.642720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.785 qpair failed and we were unable to recover it. 
00:36:42.785 [2024-12-15 06:27:02.642904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.785 [2024-12-15 06:27:02.642937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.785 qpair failed and we were unable to recover it. 00:36:42.785 [2024-12-15 06:27:02.643193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.785 [2024-12-15 06:27:02.643227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.785 qpair failed and we were unable to recover it. 00:36:42.785 [2024-12-15 06:27:02.643337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.785 [2024-12-15 06:27:02.643372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.785 qpair failed and we were unable to recover it. 00:36:42.785 [2024-12-15 06:27:02.643560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.785 [2024-12-15 06:27:02.643592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.785 qpair failed and we were unable to recover it. 00:36:42.785 [2024-12-15 06:27:02.643766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.785 [2024-12-15 06:27:02.643800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.785 qpair failed and we were unable to recover it. 
00:36:42.785 [2024-12-15 06:27:02.644066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.785 [2024-12-15 06:27:02.644101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.785 qpair failed and we were unable to recover it. 00:36:42.785 [2024-12-15 06:27:02.644238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.785 [2024-12-15 06:27:02.644271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.785 qpair failed and we were unable to recover it. 00:36:42.785 [2024-12-15 06:27:02.644517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.785 [2024-12-15 06:27:02.644550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.785 qpair failed and we were unable to recover it. 00:36:42.785 [2024-12-15 06:27:02.644749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.785 [2024-12-15 06:27:02.644782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.785 qpair failed and we were unable to recover it. 00:36:42.785 [2024-12-15 06:27:02.644909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.785 [2024-12-15 06:27:02.644943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.785 qpair failed and we were unable to recover it. 
00:36:42.785 [2024-12-15 06:27:02.645074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.785 [2024-12-15 06:27:02.645109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.785 qpair failed and we were unable to recover it. 00:36:42.785 [2024-12-15 06:27:02.645302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.785 [2024-12-15 06:27:02.645335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.785 qpair failed and we were unable to recover it. 00:36:42.785 [2024-12-15 06:27:02.645460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.785 [2024-12-15 06:27:02.645494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.785 qpair failed and we were unable to recover it. 00:36:42.785 [2024-12-15 06:27:02.645761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.785 [2024-12-15 06:27:02.645793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.785 qpair failed and we were unable to recover it. 00:36:42.785 [2024-12-15 06:27:02.645917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.785 [2024-12-15 06:27:02.645951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.785 qpair failed and we were unable to recover it. 
00:36:42.788 [2024-12-15 06:27:02.673279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.788 [2024-12-15 06:27:02.673313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.788 qpair failed and we were unable to recover it. 00:36:42.788 [2024-12-15 06:27:02.673496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.788 [2024-12-15 06:27:02.673529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.788 qpair failed and we were unable to recover it. 00:36:42.788 [2024-12-15 06:27:02.673634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.788 [2024-12-15 06:27:02.673667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.788 qpair failed and we were unable to recover it. 00:36:42.788 [2024-12-15 06:27:02.673904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.788 [2024-12-15 06:27:02.673938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.788 qpair failed and we were unable to recover it. 00:36:42.788 [2024-12-15 06:27:02.674227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.788 [2024-12-15 06:27:02.674261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.788 qpair failed and we were unable to recover it. 
00:36:42.788 [2024-12-15 06:27:02.674446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.788 [2024-12-15 06:27:02.674480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.788 qpair failed and we were unable to recover it. 00:36:42.788 [2024-12-15 06:27:02.674670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.788 [2024-12-15 06:27:02.674704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.788 qpair failed and we were unable to recover it. 00:36:42.788 [2024-12-15 06:27:02.674893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.788 [2024-12-15 06:27:02.674926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.788 qpair failed and we were unable to recover it. 00:36:42.788 [2024-12-15 06:27:02.675045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.788 [2024-12-15 06:27:02.675080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.788 qpair failed and we were unable to recover it. 00:36:42.788 [2024-12-15 06:27:02.675207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.788 [2024-12-15 06:27:02.675241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.788 qpair failed and we were unable to recover it. 
00:36:42.788 [2024-12-15 06:27:02.675506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.788 [2024-12-15 06:27:02.675538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.788 qpair failed and we were unable to recover it. 00:36:42.788 [2024-12-15 06:27:02.675672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.788 [2024-12-15 06:27:02.675705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.788 qpair failed and we were unable to recover it. 00:36:42.788 [2024-12-15 06:27:02.675845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.788 [2024-12-15 06:27:02.675878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.788 qpair failed and we were unable to recover it. 00:36:42.788 [2024-12-15 06:27:02.676028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.788 [2024-12-15 06:27:02.676063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.788 qpair failed and we were unable to recover it. 00:36:42.788 [2024-12-15 06:27:02.676243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.788 [2024-12-15 06:27:02.676276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.788 qpair failed and we were unable to recover it. 
00:36:42.788 [2024-12-15 06:27:02.676483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.788 [2024-12-15 06:27:02.676516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.788 qpair failed and we were unable to recover it. 00:36:42.788 [2024-12-15 06:27:02.676781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.788 [2024-12-15 06:27:02.676814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.788 qpair failed and we were unable to recover it. 00:36:42.788 [2024-12-15 06:27:02.677004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.788 [2024-12-15 06:27:02.677040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.788 qpair failed and we were unable to recover it. 00:36:42.788 [2024-12-15 06:27:02.677295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.788 [2024-12-15 06:27:02.677333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.788 qpair failed and we were unable to recover it. 00:36:42.788 [2024-12-15 06:27:02.677458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.788 [2024-12-15 06:27:02.677492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.788 qpair failed and we were unable to recover it. 
00:36:42.788 [2024-12-15 06:27:02.677611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.788 [2024-12-15 06:27:02.677644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.788 qpair failed and we were unable to recover it. 00:36:42.788 [2024-12-15 06:27:02.677814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.788 [2024-12-15 06:27:02.677848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.788 qpair failed and we were unable to recover it. 00:36:42.788 [2024-12-15 06:27:02.678050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.788 [2024-12-15 06:27:02.678086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.788 qpair failed and we were unable to recover it. 00:36:42.788 [2024-12-15 06:27:02.678261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.788 [2024-12-15 06:27:02.678295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.788 qpair failed and we were unable to recover it. 00:36:42.788 [2024-12-15 06:27:02.678430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.788 [2024-12-15 06:27:02.678464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.788 qpair failed and we were unable to recover it. 
00:36:42.789 [2024-12-15 06:27:02.678578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.678611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 00:36:42.789 [2024-12-15 06:27:02.678785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.678819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 00:36:42.789 [2024-12-15 06:27:02.679013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.679048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 00:36:42.789 [2024-12-15 06:27:02.679167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.679200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 00:36:42.789 [2024-12-15 06:27:02.679311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.679345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 
00:36:42.789 [2024-12-15 06:27:02.679470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.679504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 00:36:42.789 [2024-12-15 06:27:02.679716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.679749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 00:36:42.789 [2024-12-15 06:27:02.679876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.679910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 00:36:42.789 [2024-12-15 06:27:02.680087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.680122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 00:36:42.789 [2024-12-15 06:27:02.680318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.680350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 
00:36:42.789 [2024-12-15 06:27:02.680470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.680504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 00:36:42.789 [2024-12-15 06:27:02.680622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.680655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 00:36:42.789 [2024-12-15 06:27:02.680897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.680930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 00:36:42.789 [2024-12-15 06:27:02.681112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.681147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 00:36:42.789 [2024-12-15 06:27:02.681281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.681314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 
00:36:42.789 [2024-12-15 06:27:02.681523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.681558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 00:36:42.789 [2024-12-15 06:27:02.681755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.681788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 00:36:42.789 [2024-12-15 06:27:02.681910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.681942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 00:36:42.789 [2024-12-15 06:27:02.682141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.682177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 00:36:42.789 [2024-12-15 06:27:02.682368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.682402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 
00:36:42.789 [2024-12-15 06:27:02.682647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.682687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 00:36:42.789 [2024-12-15 06:27:02.682863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.682896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 00:36:42.789 [2024-12-15 06:27:02.683085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.683121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 00:36:42.789 [2024-12-15 06:27:02.683378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.683411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 00:36:42.789 [2024-12-15 06:27:02.683665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.683698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 
00:36:42.789 [2024-12-15 06:27:02.683815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.683850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 00:36:42.789 [2024-12-15 06:27:02.683980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.684023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 00:36:42.789 [2024-12-15 06:27:02.684153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.684186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 00:36:42.789 [2024-12-15 06:27:02.684378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.684412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 00:36:42.789 [2024-12-15 06:27:02.684587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.684620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 
00:36:42.789 [2024-12-15 06:27:02.684736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.684769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 00:36:42.789 [2024-12-15 06:27:02.684942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.684975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 00:36:42.789 [2024-12-15 06:27:02.685244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.685277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 00:36:42.789 [2024-12-15 06:27:02.685396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.685429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 00:36:42.789 [2024-12-15 06:27:02.685705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.789 [2024-12-15 06:27:02.685739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.789 qpair failed and we were unable to recover it. 
00:36:42.789 [2024-12-15 06:27:02.686012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.790 [2024-12-15 06:27:02.686047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.790 qpair failed and we were unable to recover it. 00:36:42.790 [2024-12-15 06:27:02.686240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.790 [2024-12-15 06:27:02.686274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.790 qpair failed and we were unable to recover it. 00:36:42.790 [2024-12-15 06:27:02.686391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.790 [2024-12-15 06:27:02.686424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.790 qpair failed and we were unable to recover it. 00:36:42.790 [2024-12-15 06:27:02.686600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.790 [2024-12-15 06:27:02.686635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.790 qpair failed and we were unable to recover it. 00:36:42.790 [2024-12-15 06:27:02.686894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.790 [2024-12-15 06:27:02.686928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.790 qpair failed and we were unable to recover it. 
00:36:42.790 [2024-12-15 06:27:02.687117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.790 [2024-12-15 06:27:02.687151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.790 qpair failed and we were unable to recover it. 00:36:42.790 [2024-12-15 06:27:02.687273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.790 [2024-12-15 06:27:02.687307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.790 qpair failed and we were unable to recover it. 00:36:42.790 [2024-12-15 06:27:02.687499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.790 [2024-12-15 06:27:02.687533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.790 qpair failed and we were unable to recover it. 00:36:42.790 [2024-12-15 06:27:02.687824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.790 [2024-12-15 06:27:02.687856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.790 qpair failed and we were unable to recover it. 00:36:42.790 [2024-12-15 06:27:02.688044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.790 [2024-12-15 06:27:02.688079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.790 qpair failed and we were unable to recover it. 
00:36:42.790 [2024-12-15 06:27:02.688187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.790 [2024-12-15 06:27:02.688220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.790 qpair failed and we were unable to recover it. 00:36:42.790 [2024-12-15 06:27:02.688353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.790 [2024-12-15 06:27:02.688387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.790 qpair failed and we were unable to recover it. 00:36:42.790 [2024-12-15 06:27:02.688656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.790 [2024-12-15 06:27:02.688689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.790 qpair failed and we were unable to recover it. 00:36:42.790 [2024-12-15 06:27:02.688879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.790 [2024-12-15 06:27:02.688913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.790 qpair failed and we were unable to recover it. 00:36:42.790 [2024-12-15 06:27:02.689127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.790 [2024-12-15 06:27:02.689162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.790 qpair failed and we were unable to recover it. 
00:36:42.790 [2024-12-15 06:27:02.689274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.790 [2024-12-15 06:27:02.689307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.790 qpair failed and we were unable to recover it.
[the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair, followed by "qpair failed and we were unable to recover it.", repeats continuously from 06:27:02.689 through 06:27:02.714 — every connection attempt fails with errno = 111 against addr=10.0.0.2, port=4420, across tqpair handles 0x1c89cd0, 0x7ff288000b90, and 0x7ff290000b90]
00:36:42.793 [2024-12-15 06:27:02.714908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.793 [2024-12-15 06:27:02.714941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.793 qpair failed and we were unable to recover it. 00:36:42.793 [2024-12-15 06:27:02.715140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.793 [2024-12-15 06:27:02.715173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.793 qpair failed and we were unable to recover it. 00:36:42.793 [2024-12-15 06:27:02.715312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.793 [2024-12-15 06:27:02.715345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.793 qpair failed and we were unable to recover it. 00:36:42.793 [2024-12-15 06:27:02.715528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.793 [2024-12-15 06:27:02.715561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.793 qpair failed and we were unable to recover it. 00:36:42.793 [2024-12-15 06:27:02.715737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.793 [2024-12-15 06:27:02.715770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.793 qpair failed and we were unable to recover it. 
00:36:42.793 [2024-12-15 06:27:02.715959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.793 [2024-12-15 06:27:02.716002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.793 qpair failed and we were unable to recover it. 00:36:42.793 [2024-12-15 06:27:02.716272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.793 [2024-12-15 06:27:02.716306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.793 qpair failed and we were unable to recover it. 00:36:42.793 [2024-12-15 06:27:02.716415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.793 [2024-12-15 06:27:02.716447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.793 qpair failed and we were unable to recover it. 00:36:42.793 [2024-12-15 06:27:02.716687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.793 [2024-12-15 06:27:02.716720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.793 qpair failed and we were unable to recover it. 00:36:42.793 [2024-12-15 06:27:02.716907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.793 [2024-12-15 06:27:02.716940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.793 qpair failed and we were unable to recover it. 
00:36:42.793 [2024-12-15 06:27:02.717198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.793 [2024-12-15 06:27:02.717232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.793 qpair failed and we were unable to recover it. 00:36:42.793 [2024-12-15 06:27:02.717345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.793 [2024-12-15 06:27:02.717376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.793 qpair failed and we were unable to recover it. 00:36:42.793 [2024-12-15 06:27:02.717559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.793 [2024-12-15 06:27:02.717593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.793 qpair failed and we were unable to recover it. 00:36:42.793 [2024-12-15 06:27:02.717846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.793 [2024-12-15 06:27:02.717878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.793 qpair failed and we were unable to recover it. 00:36:42.793 [2024-12-15 06:27:02.718063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.793 [2024-12-15 06:27:02.718099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.793 qpair failed and we were unable to recover it. 
00:36:42.793 [2024-12-15 06:27:02.718287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.793 [2024-12-15 06:27:02.718321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.793 qpair failed and we were unable to recover it. 00:36:42.793 [2024-12-15 06:27:02.718509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.793 [2024-12-15 06:27:02.718542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.793 qpair failed and we were unable to recover it. 00:36:42.793 [2024-12-15 06:27:02.718660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.793 [2024-12-15 06:27:02.718693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.793 qpair failed and we were unable to recover it. 00:36:42.793 [2024-12-15 06:27:02.718880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.793 [2024-12-15 06:27:02.718914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.793 qpair failed and we were unable to recover it. 00:36:42.793 [2024-12-15 06:27:02.719096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.793 [2024-12-15 06:27:02.719131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.793 qpair failed and we were unable to recover it. 
00:36:42.793 [2024-12-15 06:27:02.719337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.793 [2024-12-15 06:27:02.719371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.793 qpair failed and we were unable to recover it. 00:36:42.793 [2024-12-15 06:27:02.719632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.793 [2024-12-15 06:27:02.719665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.793 qpair failed and we were unable to recover it. 00:36:42.793 [2024-12-15 06:27:02.719946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.793 [2024-12-15 06:27:02.719980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.793 qpair failed and we were unable to recover it. 00:36:42.793 [2024-12-15 06:27:02.720182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.793 [2024-12-15 06:27:02.720217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.793 qpair failed and we were unable to recover it. 00:36:42.793 [2024-12-15 06:27:02.720468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.793 [2024-12-15 06:27:02.720502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 
00:36:42.794 [2024-12-15 06:27:02.720636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.720670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 00:36:42.794 [2024-12-15 06:27:02.720774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.720807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 00:36:42.794 [2024-12-15 06:27:02.720920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.720951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 00:36:42.794 [2024-12-15 06:27:02.721168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.721203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 00:36:42.794 [2024-12-15 06:27:02.721394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.721434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 
00:36:42.794 [2024-12-15 06:27:02.721609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.721642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 00:36:42.794 [2024-12-15 06:27:02.721828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.721860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 00:36:42.794 [2024-12-15 06:27:02.721976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.722019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 00:36:42.794 [2024-12-15 06:27:02.722215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.722249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 00:36:42.794 [2024-12-15 06:27:02.722424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.722457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 
00:36:42.794 [2024-12-15 06:27:02.722665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.722699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 00:36:42.794 [2024-12-15 06:27:02.722981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.723037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 00:36:42.794 [2024-12-15 06:27:02.723178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.723212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 00:36:42.794 [2024-12-15 06:27:02.723407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.723441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 00:36:42.794 [2024-12-15 06:27:02.723619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.723652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 
00:36:42.794 [2024-12-15 06:27:02.723890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.723922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 00:36:42.794 [2024-12-15 06:27:02.724036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.724070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 00:36:42.794 [2024-12-15 06:27:02.724195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.724228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 00:36:42.794 [2024-12-15 06:27:02.724408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.724440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 00:36:42.794 [2024-12-15 06:27:02.724707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.724741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 
00:36:42.794 [2024-12-15 06:27:02.724928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.724959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 00:36:42.794 [2024-12-15 06:27:02.725096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.725130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 00:36:42.794 [2024-12-15 06:27:02.725329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.725362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 00:36:42.794 [2024-12-15 06:27:02.725491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.725525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 00:36:42.794 [2024-12-15 06:27:02.725715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.725749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 
00:36:42.794 [2024-12-15 06:27:02.725933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.725966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 00:36:42.794 [2024-12-15 06:27:02.726086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.726119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 00:36:42.794 [2024-12-15 06:27:02.726294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.726328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 00:36:42.794 [2024-12-15 06:27:02.726511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.726545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 00:36:42.794 [2024-12-15 06:27:02.726679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.726713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 
00:36:42.794 [2024-12-15 06:27:02.726899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.726931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 00:36:42.794 [2024-12-15 06:27:02.727134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.794 [2024-12-15 06:27:02.727175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.794 qpair failed and we were unable to recover it. 00:36:42.794 [2024-12-15 06:27:02.727349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.795 [2024-12-15 06:27:02.727383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.795 qpair failed and we were unable to recover it. 00:36:42.795 [2024-12-15 06:27:02.727572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.795 [2024-12-15 06:27:02.727605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.795 qpair failed and we were unable to recover it. 00:36:42.795 [2024-12-15 06:27:02.727711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.795 [2024-12-15 06:27:02.727743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.795 qpair failed and we were unable to recover it. 
00:36:42.795 [2024-12-15 06:27:02.727936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.795 [2024-12-15 06:27:02.727969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.795 qpair failed and we were unable to recover it. 00:36:42.795 [2024-12-15 06:27:02.728224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.795 [2024-12-15 06:27:02.728257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.795 qpair failed and we were unable to recover it. 00:36:42.795 [2024-12-15 06:27:02.728454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.795 [2024-12-15 06:27:02.728488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.795 qpair failed and we were unable to recover it. 00:36:42.795 [2024-12-15 06:27:02.728666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.795 [2024-12-15 06:27:02.728700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.795 qpair failed and we were unable to recover it. 00:36:42.795 [2024-12-15 06:27:02.728816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.795 [2024-12-15 06:27:02.728849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.795 qpair failed and we were unable to recover it. 
00:36:42.795 [2024-12-15 06:27:02.729032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.795 [2024-12-15 06:27:02.729067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.795 qpair failed and we were unable to recover it. 00:36:42.795 [2024-12-15 06:27:02.729262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.795 [2024-12-15 06:27:02.729296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.795 qpair failed and we were unable to recover it. 00:36:42.795 [2024-12-15 06:27:02.729488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.795 [2024-12-15 06:27:02.729521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.795 qpair failed and we were unable to recover it. 00:36:42.795 [2024-12-15 06:27:02.729642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.795 [2024-12-15 06:27:02.729676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.795 qpair failed and we were unable to recover it. 00:36:42.795 [2024-12-15 06:27:02.729938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.795 [2024-12-15 06:27:02.729971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.795 qpair failed and we were unable to recover it. 
00:36:42.795 [2024-12-15 06:27:02.730119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.795 [2024-12-15 06:27:02.730151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.795 qpair failed and we were unable to recover it. 00:36:42.795 [2024-12-15 06:27:02.730279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.795 [2024-12-15 06:27:02.730310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.795 qpair failed and we were unable to recover it. 00:36:42.795 [2024-12-15 06:27:02.730421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.795 [2024-12-15 06:27:02.730454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.795 qpair failed and we were unable to recover it. 00:36:42.795 [2024-12-15 06:27:02.730591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.795 [2024-12-15 06:27:02.730624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.795 qpair failed and we were unable to recover it. 00:36:42.795 [2024-12-15 06:27:02.730749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.795 [2024-12-15 06:27:02.730782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.795 qpair failed and we were unable to recover it. 
00:36:42.795 [2024-12-15 06:27:02.730960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.795 [2024-12-15 06:27:02.731001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.795 qpair failed and we were unable to recover it. 00:36:42.795 [2024-12-15 06:27:02.731192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.795 [2024-12-15 06:27:02.731225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.795 qpair failed and we were unable to recover it. 00:36:42.795 [2024-12-15 06:27:02.731335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.795 [2024-12-15 06:27:02.731367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.795 qpair failed and we were unable to recover it. 00:36:42.795 [2024-12-15 06:27:02.731487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.795 [2024-12-15 06:27:02.731520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.795 qpair failed and we were unable to recover it. 00:36:42.795 [2024-12-15 06:27:02.731629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.795 [2024-12-15 06:27:02.731662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.795 qpair failed and we were unable to recover it. 
00:36:42.796 [2024-12-15 06:27:02.740931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.796 [2024-12-15 06:27:02.740963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.796 qpair failed and we were unable to recover it.
00:36:42.796 [2024-12-15 06:27:02.741206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.796 [2024-12-15 06:27:02.741279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.796 qpair failed and we were unable to recover it.
00:36:42.796 [2024-12-15 06:27:02.741424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.796 [2024-12-15 06:27:02.741462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.796 qpair failed and we were unable to recover it.
00:36:42.796 [2024-12-15 06:27:02.741599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.796 [2024-12-15 06:27:02.741634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.796 qpair failed and we were unable to recover it.
00:36:42.796 [2024-12-15 06:27:02.741825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.796 [2024-12-15 06:27:02.741859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:42.796 qpair failed and we were unable to recover it.
00:36:42.798 [2024-12-15 06:27:02.754678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.798 [2024-12-15 06:27:02.754714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.798 qpair failed and we were unable to recover it. 00:36:42.798 [2024-12-15 06:27:02.754911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.798 [2024-12-15 06:27:02.754945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.798 qpair failed and we were unable to recover it. 00:36:42.798 [2024-12-15 06:27:02.755198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.798 [2024-12-15 06:27:02.755233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.798 qpair failed and we were unable to recover it. 00:36:42.798 [2024-12-15 06:27:02.755443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.798 [2024-12-15 06:27:02.755477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.798 qpair failed and we were unable to recover it. 00:36:42.798 [2024-12-15 06:27:02.755724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.798 [2024-12-15 06:27:02.755758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.798 qpair failed and we were unable to recover it. 
00:36:42.798 [2024-12-15 06:27:02.755876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.798 [2024-12-15 06:27:02.755911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.798 qpair failed and we were unable to recover it. 00:36:42.798 [2024-12-15 06:27:02.756040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.798 [2024-12-15 06:27:02.756090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.798 qpair failed and we were unable to recover it. 00:36:42.798 [2024-12-15 06:27:02.756220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.798 [2024-12-15 06:27:02.756255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.798 qpair failed and we were unable to recover it. 00:36:42.798 [2024-12-15 06:27:02.756430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.798 [2024-12-15 06:27:02.756464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.798 qpair failed and we were unable to recover it. 00:36:42.799 [2024-12-15 06:27:02.756643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.756677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 
00:36:42.799 [2024-12-15 06:27:02.756875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.756908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-12-15 06:27:02.757090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.757131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-12-15 06:27:02.757320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.757355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-12-15 06:27:02.757549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.757583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-12-15 06:27:02.757768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.757802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 
00:36:42.799 [2024-12-15 06:27:02.757930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.757965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-12-15 06:27:02.758097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.758131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-12-15 06:27:02.758375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.758411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-12-15 06:27:02.758677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.758713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-12-15 06:27:02.758898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.758932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 
00:36:42.799 [2024-12-15 06:27:02.759119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.759154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-12-15 06:27:02.759288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.759323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-12-15 06:27:02.759429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.759464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-12-15 06:27:02.759641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.759676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-12-15 06:27:02.759817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.759852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 
00:36:42.799 [2024-12-15 06:27:02.760045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.760081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-12-15 06:27:02.760220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.760259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-12-15 06:27:02.760446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.760479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-12-15 06:27:02.760603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.760637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-12-15 06:27:02.760824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.760859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 
00:36:42.799 [2024-12-15 06:27:02.761066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.761102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-12-15 06:27:02.761292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.761326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-12-15 06:27:02.761602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.761636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-12-15 06:27:02.761829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.761863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-12-15 06:27:02.762118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.762153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 
00:36:42.799 [2024-12-15 06:27:02.762271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.762306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-12-15 06:27:02.762447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.762482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-12-15 06:27:02.762591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.762625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-12-15 06:27:02.762866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.762901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-12-15 06:27:02.763009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.763044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 
00:36:42.799 [2024-12-15 06:27:02.763228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.763262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-12-15 06:27:02.763514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.763548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-12-15 06:27:02.763738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.763773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-12-15 06:27:02.763949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.763982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 00:36:42.799 [2024-12-15 06:27:02.764237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.799 [2024-12-15 06:27:02.764273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.799 qpair failed and we were unable to recover it. 
00:36:42.799 [2024-12-15 06:27:02.764380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.764413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 00:36:42.800 [2024-12-15 06:27:02.764523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.764557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 00:36:42.800 [2024-12-15 06:27:02.764678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.764712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 00:36:42.800 [2024-12-15 06:27:02.764898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.764932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 00:36:42.800 [2024-12-15 06:27:02.765078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.765114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 
00:36:42.800 [2024-12-15 06:27:02.765229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.765263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 00:36:42.800 [2024-12-15 06:27:02.765454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.765491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 00:36:42.800 [2024-12-15 06:27:02.765602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.765635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 00:36:42.800 [2024-12-15 06:27:02.765860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.765895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 00:36:42.800 [2024-12-15 06:27:02.766073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.766108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 
00:36:42.800 [2024-12-15 06:27:02.766302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.766336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 00:36:42.800 [2024-12-15 06:27:02.766451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.766484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 00:36:42.800 [2024-12-15 06:27:02.766664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.766697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 00:36:42.800 [2024-12-15 06:27:02.766989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.767031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 00:36:42.800 [2024-12-15 06:27:02.767207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.767241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 
00:36:42.800 [2024-12-15 06:27:02.767433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.767467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 00:36:42.800 [2024-12-15 06:27:02.767595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.767629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 00:36:42.800 [2024-12-15 06:27:02.767758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.767792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 00:36:42.800 [2024-12-15 06:27:02.767904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.767938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 00:36:42.800 [2024-12-15 06:27:02.768121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.768156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 
00:36:42.800 [2024-12-15 06:27:02.768327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.768363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 00:36:42.800 [2024-12-15 06:27:02.768538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.768577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 00:36:42.800 [2024-12-15 06:27:02.768696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.768730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 00:36:42.800 [2024-12-15 06:27:02.768837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.768872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 00:36:42.800 [2024-12-15 06:27:02.769118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.769153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 
00:36:42.800 [2024-12-15 06:27:02.769276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.769310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 00:36:42.800 [2024-12-15 06:27:02.769501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.769536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 00:36:42.800 [2024-12-15 06:27:02.769716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.769750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 00:36:42.800 [2024-12-15 06:27:02.770027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.770062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 00:36:42.800 [2024-12-15 06:27:02.770182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.770216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 
00:36:42.800 [2024-12-15 06:27:02.770508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.800 [2024-12-15 06:27:02.770542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:42.800 qpair failed and we were unable to recover it. 
00:36:42.803 [2024-12-15 06:27:02.790797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.803 [2024-12-15 06:27:02.790869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.803 qpair failed and we were unable to recover it. 
00:36:42.803 [2024-12-15 06:27:02.794726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.803 [2024-12-15 06:27:02.794759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.803 qpair failed and we were unable to recover it. 00:36:42.803 [2024-12-15 06:27:02.794870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.803 [2024-12-15 06:27:02.794903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 00:36:42.804 [2024-12-15 06:27:02.795148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.795184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 00:36:42.804 [2024-12-15 06:27:02.795373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.795406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 00:36:42.804 [2024-12-15 06:27:02.795521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.795554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 
00:36:42.804 [2024-12-15 06:27:02.795681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.795714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 00:36:42.804 [2024-12-15 06:27:02.795831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.795864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 00:36:42.804 [2024-12-15 06:27:02.795981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.796024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 00:36:42.804 [2024-12-15 06:27:02.796147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.796180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 00:36:42.804 [2024-12-15 06:27:02.796305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.796339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 
00:36:42.804 [2024-12-15 06:27:02.796472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.796506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 00:36:42.804 [2024-12-15 06:27:02.796635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.796668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 00:36:42.804 [2024-12-15 06:27:02.796781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.796813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 00:36:42.804 [2024-12-15 06:27:02.797079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.797113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 00:36:42.804 [2024-12-15 06:27:02.797243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.797277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 
00:36:42.804 [2024-12-15 06:27:02.797398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.797430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 00:36:42.804 [2024-12-15 06:27:02.797612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.797646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 00:36:42.804 [2024-12-15 06:27:02.797838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.797870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 00:36:42.804 [2024-12-15 06:27:02.798065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.798100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 00:36:42.804 [2024-12-15 06:27:02.798287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.798320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 
00:36:42.804 [2024-12-15 06:27:02.798539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.798573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 00:36:42.804 [2024-12-15 06:27:02.798689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.798724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 00:36:42.804 [2024-12-15 06:27:02.798848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.798880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 00:36:42.804 [2024-12-15 06:27:02.799102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.799138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 00:36:42.804 [2024-12-15 06:27:02.799325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.799358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 
00:36:42.804 [2024-12-15 06:27:02.799542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.799575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 00:36:42.804 [2024-12-15 06:27:02.799767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.799800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 00:36:42.804 [2024-12-15 06:27:02.799978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.800020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 00:36:42.804 [2024-12-15 06:27:02.800203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.800236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 00:36:42.804 [2024-12-15 06:27:02.800420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.800454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 
00:36:42.804 [2024-12-15 06:27:02.800573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.800606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 00:36:42.804 [2024-12-15 06:27:02.800797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.800830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 00:36:42.804 [2024-12-15 06:27:02.800951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.800985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 00:36:42.804 [2024-12-15 06:27:02.801186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.801220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 00:36:42.804 [2024-12-15 06:27:02.801414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.801447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 
00:36:42.804 [2024-12-15 06:27:02.801646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.801680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 00:36:42.804 [2024-12-15 06:27:02.801858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.801897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 00:36:42.804 [2024-12-15 06:27:02.802092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.802127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.804 qpair failed and we were unable to recover it. 00:36:42.804 [2024-12-15 06:27:02.802241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.804 [2024-12-15 06:27:02.802274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 00:36:42.805 [2024-12-15 06:27:02.802398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.802430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 
00:36:42.805 [2024-12-15 06:27:02.802549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.802582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 00:36:42.805 [2024-12-15 06:27:02.802775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.802808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 00:36:42.805 [2024-12-15 06:27:02.802988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.803031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 00:36:42.805 [2024-12-15 06:27:02.803206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.803237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 00:36:42.805 [2024-12-15 06:27:02.803351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.803384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 
00:36:42.805 [2024-12-15 06:27:02.803494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.803526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 00:36:42.805 [2024-12-15 06:27:02.803663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.803695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 00:36:42.805 [2024-12-15 06:27:02.803939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.803973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 00:36:42.805 [2024-12-15 06:27:02.804096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.804134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 00:36:42.805 [2024-12-15 06:27:02.804251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.804281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 
00:36:42.805 [2024-12-15 06:27:02.804404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.804436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 00:36:42.805 [2024-12-15 06:27:02.804612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.804647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 00:36:42.805 [2024-12-15 06:27:02.804837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.804869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 00:36:42.805 [2024-12-15 06:27:02.805048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.805081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 00:36:42.805 [2024-12-15 06:27:02.805322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.805356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 
00:36:42.805 [2024-12-15 06:27:02.805561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.805595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 00:36:42.805 [2024-12-15 06:27:02.805728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.805762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 00:36:42.805 [2024-12-15 06:27:02.805939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.805973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 00:36:42.805 [2024-12-15 06:27:02.806269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.806301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 00:36:42.805 [2024-12-15 06:27:02.806437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.806471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 
00:36:42.805 [2024-12-15 06:27:02.806646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.806679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 00:36:42.805 [2024-12-15 06:27:02.806875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.806907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 00:36:42.805 [2024-12-15 06:27:02.807090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.807124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 00:36:42.805 [2024-12-15 06:27:02.807350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.807424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 00:36:42.805 [2024-12-15 06:27:02.807696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.807734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 
00:36:42.805 [2024-12-15 06:27:02.807851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.807887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 00:36:42.805 [2024-12-15 06:27:02.808083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.808122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 00:36:42.805 [2024-12-15 06:27:02.808298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.808332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 00:36:42.805 [2024-12-15 06:27:02.808465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.808499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 00:36:42.805 [2024-12-15 06:27:02.808684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.808718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 
00:36:42.805 [2024-12-15 06:27:02.808837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.808870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 00:36:42.805 [2024-12-15 06:27:02.809063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.809098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 00:36:42.805 [2024-12-15 06:27:02.809226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.809260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 00:36:42.805 [2024-12-15 06:27:02.809461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.809495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 00:36:42.805 [2024-12-15 06:27:02.809710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.805 [2024-12-15 06:27:02.809744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:42.805 qpair failed and we were unable to recover it. 
00:36:42.805 [2024-12-15 06:27:02.809854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.805 [2024-12-15 06:27:02.809889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:42.805 qpair failed and we were unable to recover it.
00:36:42.807 (last three messages repeated 67 more times for tqpair=0x7ff288000b90, timestamps 06:27:02.810080 through 06:27:02.823881)
00:36:42.807 [2024-12-15 06:27:02.824116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.807 [2024-12-15 06:27:02.824190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.807 qpair failed and we were unable to recover it.
00:36:42.809 (last three messages repeated 46 more times for tqpair=0x1c89cd0, timestamps 06:27:02.824392 through 06:27:02.832990)
00:36:42.809 [2024-12-15 06:27:02.833246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.833281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 00:36:42.809 [2024-12-15 06:27:02.833529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.833563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 00:36:42.809 [2024-12-15 06:27:02.833773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.833806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 00:36:42.809 [2024-12-15 06:27:02.834007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.834042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 00:36:42.809 [2024-12-15 06:27:02.834165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.834199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 
00:36:42.809 [2024-12-15 06:27:02.834376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.834410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 00:36:42.809 [2024-12-15 06:27:02.834654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.834688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 00:36:42.809 [2024-12-15 06:27:02.834826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.834867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 00:36:42.809 [2024-12-15 06:27:02.835055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.835090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 00:36:42.809 [2024-12-15 06:27:02.835264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.835299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 
00:36:42.809 [2024-12-15 06:27:02.835483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.835517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 00:36:42.809 [2024-12-15 06:27:02.835690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.835723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 00:36:42.809 [2024-12-15 06:27:02.835827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.835862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 00:36:42.809 [2024-12-15 06:27:02.836066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.836102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 00:36:42.809 [2024-12-15 06:27:02.836283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.836318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 
00:36:42.809 [2024-12-15 06:27:02.836573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.836608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 00:36:42.809 [2024-12-15 06:27:02.836785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.836819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 00:36:42.809 [2024-12-15 06:27:02.836963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.837006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 00:36:42.809 [2024-12-15 06:27:02.837193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.837228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 00:36:42.809 [2024-12-15 06:27:02.837495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.837529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 
00:36:42.809 [2024-12-15 06:27:02.837638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.837672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 00:36:42.809 [2024-12-15 06:27:02.837919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.837955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 00:36:42.809 [2024-12-15 06:27:02.838085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.838121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 00:36:42.809 [2024-12-15 06:27:02.838364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.838398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 00:36:42.809 [2024-12-15 06:27:02.838575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.838609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 
00:36:42.809 [2024-12-15 06:27:02.838786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.838821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 00:36:42.809 [2024-12-15 06:27:02.839027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.839064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 00:36:42.809 [2024-12-15 06:27:02.839183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.839217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 00:36:42.809 [2024-12-15 06:27:02.839410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.839445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 00:36:42.809 [2024-12-15 06:27:02.839553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.839588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 
00:36:42.809 [2024-12-15 06:27:02.839718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.809 [2024-12-15 06:27:02.839752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.809 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.840003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.840039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.840161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.840195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.840375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.840410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.840530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.840564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 
00:36:42.810 [2024-12-15 06:27:02.840690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.840727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.840842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.840876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.841048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.841084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.841211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.841245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.841432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.841465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 
00:36:42.810 [2024-12-15 06:27:02.841728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.841762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.841953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.841986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.842172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.842205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.842330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.842364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.842606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.842639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 
00:36:42.810 [2024-12-15 06:27:02.842779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.842814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.842990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.843036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.843222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.843256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.843450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.843485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.843689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.843723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 
00:36:42.810 [2024-12-15 06:27:02.843850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.843883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.844079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.844117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.844308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.844343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.844523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.844556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.844692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.844727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 
00:36:42.810 [2024-12-15 06:27:02.844851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.844887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.845131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.845166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.845352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.845387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.845587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.845621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.845812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.845847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 
00:36:42.810 [2024-12-15 06:27:02.846042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.846084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.846278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.846312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.846440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.846474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.846605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.846639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.846777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.846810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 
00:36:42.810 [2024-12-15 06:27:02.847004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.847041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.847220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.847255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.847435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.847468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.847664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.847698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 00:36:42.810 [2024-12-15 06:27:02.847815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.810 [2024-12-15 06:27:02.847849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.810 qpair failed and we were unable to recover it. 
00:36:42.811 [2024-12-15 06:27:02.847961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.811 [2024-12-15 06:27:02.848003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.811 qpair failed and we were unable to recover it. 00:36:42.811 [2024-12-15 06:27:02.848117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.811 [2024-12-15 06:27:02.848152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.811 qpair failed and we were unable to recover it. 00:36:42.811 [2024-12-15 06:27:02.848327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.811 [2024-12-15 06:27:02.848362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.811 qpair failed and we were unable to recover it. 00:36:42.811 [2024-12-15 06:27:02.848485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.811 [2024-12-15 06:27:02.848518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.811 qpair failed and we were unable to recover it. 00:36:42.811 [2024-12-15 06:27:02.848635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.811 [2024-12-15 06:27:02.848669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.811 qpair failed and we were unable to recover it. 
00:36:42.811 [2024-12-15 06:27:02.848913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.811 [2024-12-15 06:27:02.848953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.811 qpair failed and we were unable to recover it. 00:36:42.811 [2024-12-15 06:27:02.849149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.811 [2024-12-15 06:27:02.849185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.811 qpair failed and we were unable to recover it. 00:36:42.811 [2024-12-15 06:27:02.849374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.811 [2024-12-15 06:27:02.849407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.811 qpair failed and we were unable to recover it. 00:36:42.811 [2024-12-15 06:27:02.849611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.811 [2024-12-15 06:27:02.849645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.811 qpair failed and we were unable to recover it. 00:36:42.811 [2024-12-15 06:27:02.849758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.811 [2024-12-15 06:27:02.849792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.811 qpair failed and we were unable to recover it. 
00:36:42.811 [2024-12-15 06:27:02.850045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.811 [2024-12-15 06:27:02.850080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.811 qpair failed and we were unable to recover it. 00:36:42.811 [2024-12-15 06:27:02.850271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.811 [2024-12-15 06:27:02.850305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.811 qpair failed and we were unable to recover it. 00:36:42.811 [2024-12-15 06:27:02.850422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.811 [2024-12-15 06:27:02.850454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.811 qpair failed and we were unable to recover it. 00:36:42.811 [2024-12-15 06:27:02.850634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.811 [2024-12-15 06:27:02.850667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.811 qpair failed and we were unable to recover it. 00:36:42.811 [2024-12-15 06:27:02.850851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.811 [2024-12-15 06:27:02.850885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:42.811 qpair failed and we were unable to recover it. 
00:36:42.811 [2024-12-15 06:27:02.851012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.811 [2024-12-15 06:27:02.851048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.811 qpair failed and we were unable to recover it.
00:36:42.811 [2024-12-15 06:27:02.851174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.811 [2024-12-15 06:27:02.851208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.811 qpair failed and we were unable to recover it.
00:36:42.811 [2024-12-15 06:27:02.851404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.811 [2024-12-15 06:27:02.851438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.811 qpair failed and we were unable to recover it.
00:36:42.811 [2024-12-15 06:27:02.851617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.811 [2024-12-15 06:27:02.851651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.811 qpair failed and we were unable to recover it.
00:36:42.811 [2024-12-15 06:27:02.851798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.811 [2024-12-15 06:27:02.851832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.811 qpair failed and we were unable to recover it.
00:36:42.811 [2024-12-15 06:27:02.852021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.811 [2024-12-15 06:27:02.852056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.811 qpair failed and we were unable to recover it.
00:36:42.811 [2024-12-15 06:27:02.852230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.811 [2024-12-15 06:27:02.852264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.811 qpair failed and we were unable to recover it.
00:36:42.811 [2024-12-15 06:27:02.852453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.811 [2024-12-15 06:27:02.852487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.811 qpair failed and we were unable to recover it.
00:36:42.811 [2024-12-15 06:27:02.852601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.811 [2024-12-15 06:27:02.852634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.811 qpair failed and we were unable to recover it.
00:36:42.811 [2024-12-15 06:27:02.852845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.811 [2024-12-15 06:27:02.852878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.811 qpair failed and we were unable to recover it.
00:36:42.811 [2024-12-15 06:27:02.853030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.811 [2024-12-15 06:27:02.853065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.811 qpair failed and we were unable to recover it.
00:36:42.811 [2024-12-15 06:27:02.853319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.811 [2024-12-15 06:27:02.853353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.811 qpair failed and we were unable to recover it.
00:36:42.811 [2024-12-15 06:27:02.853525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.811 [2024-12-15 06:27:02.853559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.811 qpair failed and we were unable to recover it.
00:36:42.811 [2024-12-15 06:27:02.853741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.811 [2024-12-15 06:27:02.853776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.811 qpair failed and we were unable to recover it.
00:36:42.811 [2024-12-15 06:27:02.853883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.811 [2024-12-15 06:27:02.853917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.811 qpair failed and we were unable to recover it.
00:36:42.811 [2024-12-15 06:27:02.854052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.811 [2024-12-15 06:27:02.854088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.811 qpair failed and we were unable to recover it.
00:36:42.811 [2024-12-15 06:27:02.854284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.811 [2024-12-15 06:27:02.854318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.811 qpair failed and we were unable to recover it.
00:36:42.811 [2024-12-15 06:27:02.854421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.811 [2024-12-15 06:27:02.854459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.811 qpair failed and we were unable to recover it.
00:36:42.811 [2024-12-15 06:27:02.854698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.811 [2024-12-15 06:27:02.854732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.811 qpair failed and we were unable to recover it.
00:36:42.811 [2024-12-15 06:27:02.854852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.811 [2024-12-15 06:27:02.854887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.811 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.855129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.855164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.855286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.855319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.855431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.855465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.855667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.855701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.855829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.855863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.856129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.856165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.856288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.856322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.856497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.856530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.856650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.856684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.856872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.856906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.857121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.857156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.857289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.857323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.857495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.857528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.857669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.857704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.857811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.857845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.858088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.858124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.858253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.858287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.858504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.858538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.858697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.858731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.858844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.858878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.859060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.859095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.859274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.859308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.859482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.859516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.859760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.859795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.860046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.860106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.860306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.860341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.860564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.860635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.860852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.860889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.861134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.861172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.861307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.861339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.861524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.861558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.861738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.812 [2024-12-15 06:27:02.861771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.812 qpair failed and we were unable to recover it.
00:36:42.812 [2024-12-15 06:27:02.862017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.862052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.862255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.862289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.862470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.862502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.862612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.862644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.862819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.862852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.863062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.863097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.863389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.863424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.863561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.863595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.863766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.863800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.863990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.864049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.864227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.864259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.864453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.864485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.864667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.864701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.864881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.864914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.865087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.865122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.865298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.865331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.865532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.865564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.865749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.865782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.865901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.865935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.866132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.866174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.866299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.866332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.866521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.866552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.866787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.866822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.867024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.867060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.867172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.867205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.867474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.867507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.867631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.867663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.867866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.867898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.868078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.868114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.868256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.868290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.868478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.868512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.868692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.868726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.868858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.868890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.869139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.869173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.869298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.869330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.869518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.813 [2024-12-15 06:27:02.869553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.813 qpair failed and we were unable to recover it.
00:36:42.813 [2024-12-15 06:27:02.869666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.814 [2024-12-15 06:27:02.869697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.814 qpair failed and we were unable to recover it.
00:36:42.814 [2024-12-15 06:27:02.869818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.814 [2024-12-15 06:27:02.869850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.814 qpair failed and we were unable to recover it.
00:36:42.814 [2024-12-15 06:27:02.870027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.814 [2024-12-15 06:27:02.870062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.814 qpair failed and we were unable to recover it.
00:36:42.814 [2024-12-15 06:27:02.870254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.814 [2024-12-15 06:27:02.870287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.814 qpair failed and we were unable to recover it.
00:36:42.814 [2024-12-15 06:27:02.870472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.814 [2024-12-15 06:27:02.870504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.814 qpair failed and we were unable to recover it.
00:36:42.814 [2024-12-15 06:27:02.870629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.814 [2024-12-15 06:27:02.870661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.814 qpair failed and we were unable to recover it.
00:36:42.814 [2024-12-15 06:27:02.870841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.814 [2024-12-15 06:27:02.870875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:42.814 qpair failed and we were unable to recover it.
00:36:42.814 [2024-12-15 06:27:02.871065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.871100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 00:36:42.814 [2024-12-15 06:27:02.871284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.871316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 00:36:42.814 [2024-12-15 06:27:02.871509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.871554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 00:36:42.814 [2024-12-15 06:27:02.871690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.871723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 00:36:42.814 [2024-12-15 06:27:02.871855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.871888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 
00:36:42.814 [2024-12-15 06:27:02.872081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.872115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 00:36:42.814 [2024-12-15 06:27:02.872243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.872276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 00:36:42.814 [2024-12-15 06:27:02.872398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.872431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 00:36:42.814 [2024-12-15 06:27:02.872624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.872657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 00:36:42.814 [2024-12-15 06:27:02.872777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.872811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 
00:36:42.814 [2024-12-15 06:27:02.872933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.872965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 00:36:42.814 [2024-12-15 06:27:02.873110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.873145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 00:36:42.814 [2024-12-15 06:27:02.873318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.873352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 00:36:42.814 [2024-12-15 06:27:02.873477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.873510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 00:36:42.814 [2024-12-15 06:27:02.873651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.873684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 
00:36:42.814 [2024-12-15 06:27:02.873862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.873897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 00:36:42.814 [2024-12-15 06:27:02.874074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.874113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 00:36:42.814 [2024-12-15 06:27:02.874293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.874326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 00:36:42.814 [2024-12-15 06:27:02.874572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.874604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 00:36:42.814 [2024-12-15 06:27:02.874710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.874742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 
00:36:42.814 [2024-12-15 06:27:02.874879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.874913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 00:36:42.814 [2024-12-15 06:27:02.875115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.875149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 00:36:42.814 [2024-12-15 06:27:02.875263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.875295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 00:36:42.814 [2024-12-15 06:27:02.875476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.875509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 00:36:42.814 [2024-12-15 06:27:02.875637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.875670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 
00:36:42.814 [2024-12-15 06:27:02.875932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.875966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 00:36:42.814 [2024-12-15 06:27:02.876240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.876273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 00:36:42.814 [2024-12-15 06:27:02.876378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.876410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 00:36:42.814 [2024-12-15 06:27:02.876583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.876616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 00:36:42.814 [2024-12-15 06:27:02.876807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.876840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 
00:36:42.814 [2024-12-15 06:27:02.877097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.814 [2024-12-15 06:27:02.877132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.814 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.877304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.877337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.877452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.877486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.877718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.877750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.877871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.877904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 
00:36:42.815 [2024-12-15 06:27:02.878095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.878129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.878368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.878401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.878517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.878551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.878762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.878794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.878967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.879021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 
00:36:42.815 [2024-12-15 06:27:02.879266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.879298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.879431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.879463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.879654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.879687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.879962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.880006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.880196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.880230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 
00:36:42.815 [2024-12-15 06:27:02.880499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.880531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.880704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.880737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.880909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.880943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.881154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.881187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.881433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.881466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 
00:36:42.815 [2024-12-15 06:27:02.881637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.881669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.881785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.881817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.882014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.882048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.882223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.882256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.882428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.882461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 
00:36:42.815 [2024-12-15 06:27:02.882724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.882757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.883042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.883084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.883330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.883363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.883494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.883527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.883653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.883685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 
00:36:42.815 [2024-12-15 06:27:02.883810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.883841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.884020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.884053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.884296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.884329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.884467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.884501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.884688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.884720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 
00:36:42.815 [2024-12-15 06:27:02.884986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.885029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.885219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.885251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.885383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.885415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.885657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.885690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.885809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.885843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 
00:36:42.815 [2024-12-15 06:27:02.886033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.886068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.886279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.886312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.886486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.886519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.886645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.886678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.886869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.886902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 
00:36:42.815 [2024-12-15 06:27:02.887076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.887111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.887290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.887323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.887500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.887534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.887655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.887689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.815 qpair failed and we were unable to recover it. 00:36:42.815 [2024-12-15 06:27:02.887885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.815 [2024-12-15 06:27:02.887918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.816 qpair failed and we were unable to recover it. 
00:36:42.816 [2024-12-15 06:27:02.888198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.816 [2024-12-15 06:27:02.888233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.816 qpair failed and we were unable to recover it. 00:36:42.816 [2024-12-15 06:27:02.888428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.816 [2024-12-15 06:27:02.888462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.816 qpair failed and we were unable to recover it. 00:36:42.816 [2024-12-15 06:27:02.888670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.816 [2024-12-15 06:27:02.888702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.816 qpair failed and we were unable to recover it. 00:36:42.816 [2024-12-15 06:27:02.888896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.816 [2024-12-15 06:27:02.888931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.816 qpair failed and we were unable to recover it. 00:36:42.816 [2024-12-15 06:27:02.889179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.816 [2024-12-15 06:27:02.889214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.816 qpair failed and we were unable to recover it. 
00:36:42.816 [2024-12-15 06:27:02.889430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.816 [2024-12-15 06:27:02.889462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:42.816 qpair failed and we were unable to recover it. 
[identical three-message error sequence (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated continuously with only timestamps varying, from 06:27:02.889688 through 06:27:02.914111]
00:36:43.100 [2024-12-15 06:27:02.914221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.100 [2024-12-15 06:27:02.914254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.100 qpair failed and we were unable to recover it. 00:36:43.100 [2024-12-15 06:27:02.914374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.100 [2024-12-15 06:27:02.914406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.100 qpair failed and we were unable to recover it. 00:36:43.100 [2024-12-15 06:27:02.914582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.100 [2024-12-15 06:27:02.914615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.100 qpair failed and we were unable to recover it. 00:36:43.100 [2024-12-15 06:27:02.914791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.914824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.101 [2024-12-15 06:27:02.915115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.915148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 
00:36:43.101 [2024-12-15 06:27:02.915318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.915351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.101 [2024-12-15 06:27:02.915564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.915596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.101 [2024-12-15 06:27:02.915731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.915764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.101 [2024-12-15 06:27:02.915967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.916019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.101 [2024-12-15 06:27:02.916260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.916293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 
00:36:43.101 [2024-12-15 06:27:02.916464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.916495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.101 [2024-12-15 06:27:02.916695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.916726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.101 [2024-12-15 06:27:02.916850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.916883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.101 [2024-12-15 06:27:02.917060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.917094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.101 [2024-12-15 06:27:02.917284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.917316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 
00:36:43.101 [2024-12-15 06:27:02.917557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.917591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.101 [2024-12-15 06:27:02.917702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.917738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.101 [2024-12-15 06:27:02.917947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.917979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.101 [2024-12-15 06:27:02.918227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.918261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.101 [2024-12-15 06:27:02.918366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.918398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 
00:36:43.101 [2024-12-15 06:27:02.918581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.918613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.101 [2024-12-15 06:27:02.918880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.918913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.101 [2024-12-15 06:27:02.919104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.919139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.101 [2024-12-15 06:27:02.919249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.919281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.101 [2024-12-15 06:27:02.919462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.919495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 
00:36:43.101 [2024-12-15 06:27:02.919608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.919641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.101 [2024-12-15 06:27:02.919927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.919960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.101 [2024-12-15 06:27:02.920209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.920243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.101 [2024-12-15 06:27:02.920432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.920465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.101 [2024-12-15 06:27:02.920568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.920601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 
00:36:43.101 [2024-12-15 06:27:02.920834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.920867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.101 [2024-12-15 06:27:02.921052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.921086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.101 [2024-12-15 06:27:02.921271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.921303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.101 [2024-12-15 06:27:02.921409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.921439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.101 [2024-12-15 06:27:02.921620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.921653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 
00:36:43.101 [2024-12-15 06:27:02.921842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.921876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.101 [2024-12-15 06:27:02.921981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.922020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.101 [2024-12-15 06:27:02.922192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.922226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.101 [2024-12-15 06:27:02.922360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.922394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.101 [2024-12-15 06:27:02.922574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.922607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 
00:36:43.101 [2024-12-15 06:27:02.922774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.922807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.101 [2024-12-15 06:27:02.923049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.101 [2024-12-15 06:27:02.923083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.101 qpair failed and we were unable to recover it. 00:36:43.102 [2024-12-15 06:27:02.923269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.923302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 00:36:43.102 [2024-12-15 06:27:02.923446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.923479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 00:36:43.102 [2024-12-15 06:27:02.923743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.923776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 
00:36:43.102 [2024-12-15 06:27:02.923891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.923922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 00:36:43.102 [2024-12-15 06:27:02.924108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.924142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 00:36:43.102 [2024-12-15 06:27:02.924252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.924283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 00:36:43.102 [2024-12-15 06:27:02.924471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.924505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 00:36:43.102 [2024-12-15 06:27:02.924618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.924651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 
00:36:43.102 [2024-12-15 06:27:02.924837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.924870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 00:36:43.102 [2024-12-15 06:27:02.925013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.925046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 00:36:43.102 [2024-12-15 06:27:02.925285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.925317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 00:36:43.102 [2024-12-15 06:27:02.925445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.925476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 00:36:43.102 [2024-12-15 06:27:02.925720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.925752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 
00:36:43.102 [2024-12-15 06:27:02.925938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.925971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 00:36:43.102 [2024-12-15 06:27:02.926157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.926196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 00:36:43.102 [2024-12-15 06:27:02.926444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.926477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 00:36:43.102 [2024-12-15 06:27:02.926654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.926686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 00:36:43.102 [2024-12-15 06:27:02.926864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.926895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 
00:36:43.102 [2024-12-15 06:27:02.927106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.927139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 00:36:43.102 [2024-12-15 06:27:02.927329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.927360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 00:36:43.102 [2024-12-15 06:27:02.927600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.927632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 00:36:43.102 [2024-12-15 06:27:02.927829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.927862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 00:36:43.102 [2024-12-15 06:27:02.928002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.928037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 
00:36:43.102 [2024-12-15 06:27:02.928144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.928175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 00:36:43.102 [2024-12-15 06:27:02.928471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.928504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 00:36:43.102 [2024-12-15 06:27:02.928686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.928718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 00:36:43.102 [2024-12-15 06:27:02.929000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.929034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 00:36:43.102 [2024-12-15 06:27:02.929248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.929282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 
00:36:43.102 [2024-12-15 06:27:02.929465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.929497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 00:36:43.102 [2024-12-15 06:27:02.929683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.929714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 00:36:43.102 [2024-12-15 06:27:02.929924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.929956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 00:36:43.102 [2024-12-15 06:27:02.930169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.930203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 00:36:43.102 [2024-12-15 06:27:02.930388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.102 [2024-12-15 06:27:02.930422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.102 qpair failed and we were unable to recover it. 
00:36:43.102 [2024-12-15 06:27:02.930545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.103 [2024-12-15 06:27:02.930578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.103 qpair failed and we were unable to recover it. 00:36:43.103 [2024-12-15 06:27:02.930765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.103 [2024-12-15 06:27:02.930797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.103 qpair failed and we were unable to recover it. 00:36:43.103 [2024-12-15 06:27:02.930935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.103 [2024-12-15 06:27:02.930968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.103 qpair failed and we were unable to recover it. 00:36:43.103 [2024-12-15 06:27:02.931156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.103 [2024-12-15 06:27:02.931190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.103 qpair failed and we were unable to recover it. 00:36:43.103 [2024-12-15 06:27:02.931362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.103 [2024-12-15 06:27:02.931396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.103 qpair failed and we were unable to recover it. 
00:36:43.105 [2024-12-15 06:27:02.955477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.105 [2024-12-15 06:27:02.955511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.105 qpair failed and we were unable to recover it. 00:36:43.105 [2024-12-15 06:27:02.955698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.955729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.955940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.955972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.956161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.956193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.956377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.956409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 
00:36:43.106 [2024-12-15 06:27:02.956632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.956665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.956855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.956888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.957003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.957038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.957217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.957250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.957448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.957480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 
00:36:43.106 [2024-12-15 06:27:02.957721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.957754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.957881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.957914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.958048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.958083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.958367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.958401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.958595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.958629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 
00:36:43.106 [2024-12-15 06:27:02.958818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.958851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.958981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.959021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.959134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.959167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.959289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.959322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.959452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.959484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 
00:36:43.106 [2024-12-15 06:27:02.959656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.959690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.959876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.959910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.960083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.960117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.960298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.960330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.960468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.960502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 
00:36:43.106 [2024-12-15 06:27:02.960738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.960770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.960946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.960985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.961111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.961144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.961322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.961354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.961561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.961595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 
00:36:43.106 [2024-12-15 06:27:02.961726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.961759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.961879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.961911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.962088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.962121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.962322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.962356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.962530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.962564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 
00:36:43.106 [2024-12-15 06:27:02.962671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.962703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.962987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.963029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.963271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.963303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.963483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.963517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 00:36:43.106 [2024-12-15 06:27:02.963634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.106 [2024-12-15 06:27:02.963666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.106 qpair failed and we were unable to recover it. 
00:36:43.106 [2024-12-15 06:27:02.963800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.963833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 00:36:43.107 [2024-12-15 06:27:02.963953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.963985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 00:36:43.107 [2024-12-15 06:27:02.964289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.964325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 00:36:43.107 [2024-12-15 06:27:02.964460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.964492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 00:36:43.107 [2024-12-15 06:27:02.964598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.964631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 
00:36:43.107 [2024-12-15 06:27:02.964763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.964795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 00:36:43.107 [2024-12-15 06:27:02.965059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.965094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 00:36:43.107 [2024-12-15 06:27:02.965271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.965302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 00:36:43.107 [2024-12-15 06:27:02.965429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.965461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 00:36:43.107 [2024-12-15 06:27:02.965592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.965625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 
00:36:43.107 [2024-12-15 06:27:02.965747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.965778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 00:36:43.107 [2024-12-15 06:27:02.965958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.965990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 00:36:43.107 [2024-12-15 06:27:02.966182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.966215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 00:36:43.107 [2024-12-15 06:27:02.966532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.966607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 00:36:43.107 [2024-12-15 06:27:02.966902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.966939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 
00:36:43.107 [2024-12-15 06:27:02.967158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.967195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 00:36:43.107 [2024-12-15 06:27:02.967374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.967407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 00:36:43.107 [2024-12-15 06:27:02.967648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.967684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 00:36:43.107 [2024-12-15 06:27:02.967815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.967848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 00:36:43.107 [2024-12-15 06:27:02.968036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.968071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 
00:36:43.107 [2024-12-15 06:27:02.968359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.968393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 00:36:43.107 [2024-12-15 06:27:02.968650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.968684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 00:36:43.107 [2024-12-15 06:27:02.968806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.968840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 00:36:43.107 [2024-12-15 06:27:02.969015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.969051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 00:36:43.107 [2024-12-15 06:27:02.969170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.969202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 
00:36:43.107 [2024-12-15 06:27:02.969339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.969373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 00:36:43.107 [2024-12-15 06:27:02.969491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.969523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 00:36:43.107 [2024-12-15 06:27:02.969717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.969752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 00:36:43.107 [2024-12-15 06:27:02.970014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.970050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 00:36:43.107 [2024-12-15 06:27:02.970176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.970210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 
00:36:43.107 [2024-12-15 06:27:02.970403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.970438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 00:36:43.107 [2024-12-15 06:27:02.970716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.970750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 00:36:43.107 [2024-12-15 06:27:02.970921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.970955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 00:36:43.107 [2024-12-15 06:27:02.971154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.971188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 00:36:43.107 [2024-12-15 06:27:02.971459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.107 [2024-12-15 06:27:02.971493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.107 qpair failed and we were unable to recover it. 
00:36:43.107 [2024-12-15 06:27:02.971626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.108 [2024-12-15 06:27:02.971660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.108 qpair failed and we were unable to recover it.
(entry repeated 72 more times for tqpair=0x1c89cd0, timestamps advancing from 06:27:02.971873 to 06:27:02.986385, all with errno = 111 against addr=10.0.0.2, port=4420)
00:36:43.109 [2024-12-15 06:27:02.986622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.109 [2024-12-15 06:27:02.986696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.109 qpair failed and we were unable to recover it.
(entry repeated 39 more times for tqpair=0x7ff288000b90, timestamps advancing from 06:27:02.987023 to 06:27:02.995185, all with errno = 111 against addr=10.0.0.2, port=4420)
00:36:43.110 [2024-12-15 06:27:02.995535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.110 [2024-12-15 06:27:02.995607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.110 qpair failed and we were unable to recover it.
00:36:43.110 [2024-12-15 06:27:02.995746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.110 [2024-12-15 06:27:02.995784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.110 qpair failed and we were unable to recover it.
00:36:43.110 [2024-12-15 06:27:02.996031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.110 [2024-12-15 06:27:02.996067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.110 qpair failed and we were unable to recover it. 00:36:43.110 [2024-12-15 06:27:02.996327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.110 [2024-12-15 06:27:02.996361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.110 qpair failed and we were unable to recover it. 00:36:43.110 [2024-12-15 06:27:02.996556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:02.996590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 00:36:43.111 [2024-12-15 06:27:02.996714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:02.996747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 00:36:43.111 [2024-12-15 06:27:02.996983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:02.997031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 
00:36:43.111 [2024-12-15 06:27:02.997149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:02.997182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 00:36:43.111 [2024-12-15 06:27:02.997366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:02.997399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 00:36:43.111 [2024-12-15 06:27:02.997573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:02.997606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 00:36:43.111 [2024-12-15 06:27:02.997778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:02.997811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 00:36:43.111 [2024-12-15 06:27:02.997983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:02.998027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 
00:36:43.111 [2024-12-15 06:27:02.998305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:02.998338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 00:36:43.111 [2024-12-15 06:27:02.998453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:02.998494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 00:36:43.111 [2024-12-15 06:27:02.998611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:02.998643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 00:36:43.111 [2024-12-15 06:27:02.998850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:02.998883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 00:36:43.111 [2024-12-15 06:27:02.999074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:02.999109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 
00:36:43.111 [2024-12-15 06:27:02.999225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:02.999256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 00:36:43.111 [2024-12-15 06:27:02.999440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:02.999472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 00:36:43.111 [2024-12-15 06:27:02.999659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:02.999692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 00:36:43.111 [2024-12-15 06:27:02.999890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:02.999924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 00:36:43.111 [2024-12-15 06:27:03.000055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:03.000091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 
00:36:43.111 [2024-12-15 06:27:03.000276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:03.000309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 00:36:43.111 [2024-12-15 06:27:03.000422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:03.000454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 00:36:43.111 [2024-12-15 06:27:03.000564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:03.000595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 00:36:43.111 [2024-12-15 06:27:03.000840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:03.000874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 00:36:43.111 [2024-12-15 06:27:03.001005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:03.001039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 
00:36:43.111 [2024-12-15 06:27:03.001312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:03.001346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 00:36:43.111 [2024-12-15 06:27:03.001476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:03.001508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 00:36:43.111 [2024-12-15 06:27:03.001688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:03.001722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 00:36:43.111 [2024-12-15 06:27:03.001841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:03.001873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 00:36:43.111 [2024-12-15 06:27:03.002053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:03.002088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 
00:36:43.111 [2024-12-15 06:27:03.002223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:03.002256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 00:36:43.111 [2024-12-15 06:27:03.002453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:03.002486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 00:36:43.111 [2024-12-15 06:27:03.002591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:03.002623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 00:36:43.111 [2024-12-15 06:27:03.002746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:03.002779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 00:36:43.111 [2024-12-15 06:27:03.002988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:03.003031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 
00:36:43.111 [2024-12-15 06:27:03.003273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:03.003308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 00:36:43.111 [2024-12-15 06:27:03.003498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.111 [2024-12-15 06:27:03.003532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.111 qpair failed and we were unable to recover it. 00:36:43.111 [2024-12-15 06:27:03.003649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.003681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 00:36:43.112 [2024-12-15 06:27:03.003808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.003841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 00:36:43.112 [2024-12-15 06:27:03.004018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.004053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 
00:36:43.112 [2024-12-15 06:27:03.004172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.004205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 00:36:43.112 [2024-12-15 06:27:03.004317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.004348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 00:36:43.112 [2024-12-15 06:27:03.004527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.004561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 00:36:43.112 [2024-12-15 06:27:03.004738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.004770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 00:36:43.112 [2024-12-15 06:27:03.005017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.005052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 
00:36:43.112 [2024-12-15 06:27:03.005272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.005305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 00:36:43.112 [2024-12-15 06:27:03.005483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.005516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 00:36:43.112 [2024-12-15 06:27:03.005762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.005794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 00:36:43.112 [2024-12-15 06:27:03.005916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.005948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 00:36:43.112 [2024-12-15 06:27:03.006139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.006174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 
00:36:43.112 [2024-12-15 06:27:03.006437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.006472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 00:36:43.112 [2024-12-15 06:27:03.006670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.006709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 00:36:43.112 [2024-12-15 06:27:03.006833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.006866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 00:36:43.112 [2024-12-15 06:27:03.006985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.007028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 00:36:43.112 [2024-12-15 06:27:03.007213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.007246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 
00:36:43.112 [2024-12-15 06:27:03.007360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.007392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 00:36:43.112 [2024-12-15 06:27:03.007589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.007623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 00:36:43.112 [2024-12-15 06:27:03.007745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.007777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 00:36:43.112 [2024-12-15 06:27:03.007896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.007930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 00:36:43.112 [2024-12-15 06:27:03.008107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.008143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 
00:36:43.112 [2024-12-15 06:27:03.008261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.008291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 00:36:43.112 [2024-12-15 06:27:03.008412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.008446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 00:36:43.112 [2024-12-15 06:27:03.008645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.008679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 00:36:43.112 [2024-12-15 06:27:03.008852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.008884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 00:36:43.112 [2024-12-15 06:27:03.009072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.009107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 
00:36:43.112 [2024-12-15 06:27:03.009223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.009257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 00:36:43.112 [2024-12-15 06:27:03.009453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.009487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 00:36:43.112 [2024-12-15 06:27:03.009617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.009649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 00:36:43.112 [2024-12-15 06:27:03.009854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.009886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 00:36:43.112 [2024-12-15 06:27:03.010001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.010034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 
00:36:43.112 [2024-12-15 06:27:03.010209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.010243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 00:36:43.112 [2024-12-15 06:27:03.010445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.010478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 00:36:43.112 [2024-12-15 06:27:03.010609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.010643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 00:36:43.112 [2024-12-15 06:27:03.010832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.010866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 00:36:43.112 [2024-12-15 06:27:03.011110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.011147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it. 
00:36:43.112 [2024-12-15 06:27:03.011274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.112 [2024-12-15 06:27:03.011308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.112 qpair failed and we were unable to recover it.
(last three messages repeated, identical except timestamps, for every subsequent reconnection attempt through [2024-12-15 06:27:03.033127])
00:36:43.115 [2024-12-15 06:27:03.033234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.115 [2024-12-15 06:27:03.033267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.115 qpair failed and we were unable to recover it. 00:36:43.115 [2024-12-15 06:27:03.033370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.115 [2024-12-15 06:27:03.033403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.115 qpair failed and we were unable to recover it. 00:36:43.115 [2024-12-15 06:27:03.033523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.115 [2024-12-15 06:27:03.033556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.115 qpair failed and we were unable to recover it. 00:36:43.116 [2024-12-15 06:27:03.033746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.033780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 00:36:43.116 [2024-12-15 06:27:03.033967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.034009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 
00:36:43.116 [2024-12-15 06:27:03.034141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.034175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 00:36:43.116 [2024-12-15 06:27:03.034375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.034409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 00:36:43.116 [2024-12-15 06:27:03.034526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.034558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 00:36:43.116 [2024-12-15 06:27:03.034669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.034713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 00:36:43.116 [2024-12-15 06:27:03.034827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.034859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 
00:36:43.116 [2024-12-15 06:27:03.034962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.035008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 00:36:43.116 [2024-12-15 06:27:03.035227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.035262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 00:36:43.116 [2024-12-15 06:27:03.035392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.035426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 00:36:43.116 [2024-12-15 06:27:03.035596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.035630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 00:36:43.116 [2024-12-15 06:27:03.035826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.035860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 
00:36:43.116 [2024-12-15 06:27:03.036030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.036065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 00:36:43.116 [2024-12-15 06:27:03.036237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.036270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 00:36:43.116 [2024-12-15 06:27:03.036451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.036484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 00:36:43.116 [2024-12-15 06:27:03.036668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.036700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 00:36:43.116 [2024-12-15 06:27:03.036885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.036917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 
00:36:43.116 [2024-12-15 06:27:03.037099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.037134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 00:36:43.116 [2024-12-15 06:27:03.037311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.037345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 00:36:43.116 [2024-12-15 06:27:03.037474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.037507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 00:36:43.116 [2024-12-15 06:27:03.037770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.037804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 00:36:43.116 [2024-12-15 06:27:03.037910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.037944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 
00:36:43.116 [2024-12-15 06:27:03.038192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.038225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 00:36:43.116 [2024-12-15 06:27:03.038398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.038431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 00:36:43.116 [2024-12-15 06:27:03.038535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.038568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 00:36:43.116 [2024-12-15 06:27:03.038692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.038725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 00:36:43.116 [2024-12-15 06:27:03.038839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.038874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 
00:36:43.116 [2024-12-15 06:27:03.039052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.039086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 00:36:43.116 [2024-12-15 06:27:03.039347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.039380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 00:36:43.116 [2024-12-15 06:27:03.039490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.039524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 00:36:43.116 [2024-12-15 06:27:03.039631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.039664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 00:36:43.116 [2024-12-15 06:27:03.039938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.039971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 
00:36:43.116 [2024-12-15 06:27:03.040169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.040203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 00:36:43.116 [2024-12-15 06:27:03.040389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.040423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 00:36:43.116 [2024-12-15 06:27:03.040536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.116 [2024-12-15 06:27:03.040569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.116 qpair failed and we were unable to recover it. 00:36:43.117 [2024-12-15 06:27:03.040672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.040705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 00:36:43.117 [2024-12-15 06:27:03.040882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.040915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 
00:36:43.117 [2024-12-15 06:27:03.041108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.041143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 00:36:43.117 [2024-12-15 06:27:03.041257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.041289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 00:36:43.117 [2024-12-15 06:27:03.041528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.041563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 00:36:43.117 [2024-12-15 06:27:03.041713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.041746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 00:36:43.117 [2024-12-15 06:27:03.041921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.041953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 
00:36:43.117 [2024-12-15 06:27:03.042123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.042160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 00:36:43.117 [2024-12-15 06:27:03.042406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.042440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 00:36:43.117 [2024-12-15 06:27:03.042572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.042606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 00:36:43.117 [2024-12-15 06:27:03.042717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.042756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 00:36:43.117 [2024-12-15 06:27:03.042877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.042910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 
00:36:43.117 [2024-12-15 06:27:03.043028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.043062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 00:36:43.117 [2024-12-15 06:27:03.043305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.043338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 00:36:43.117 [2024-12-15 06:27:03.043444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.043476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 00:36:43.117 [2024-12-15 06:27:03.043669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.043702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 00:36:43.117 [2024-12-15 06:27:03.043830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.043864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 
00:36:43.117 [2024-12-15 06:27:03.044053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.044091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 00:36:43.117 [2024-12-15 06:27:03.044270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.044311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 00:36:43.117 [2024-12-15 06:27:03.044434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.044468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 00:36:43.117 [2024-12-15 06:27:03.044651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.044684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 00:36:43.117 [2024-12-15 06:27:03.044797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.044830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 
00:36:43.117 [2024-12-15 06:27:03.045030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.045065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 00:36:43.117 [2024-12-15 06:27:03.045251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.045285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 00:36:43.117 [2024-12-15 06:27:03.045425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.045459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 00:36:43.117 [2024-12-15 06:27:03.045641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.045675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 00:36:43.117 [2024-12-15 06:27:03.045801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.045835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 
00:36:43.117 [2024-12-15 06:27:03.045940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.045973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 00:36:43.117 [2024-12-15 06:27:03.046092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.046125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 00:36:43.117 [2024-12-15 06:27:03.046368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.046402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 00:36:43.117 [2024-12-15 06:27:03.046594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.046627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 00:36:43.117 [2024-12-15 06:27:03.046730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.046763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 
00:36:43.117 [2024-12-15 06:27:03.046940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.046973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 00:36:43.117 [2024-12-15 06:27:03.047091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.047124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 00:36:43.117 [2024-12-15 06:27:03.047240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.047274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 00:36:43.117 [2024-12-15 06:27:03.047393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.047427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 00:36:43.117 [2024-12-15 06:27:03.048835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.117 [2024-12-15 06:27:03.048891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.117 qpair failed and we were unable to recover it. 
00:36:43.117 [2024-12-15 06:27:03.049105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.117 [2024-12-15 06:27:03.049144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.117 qpair failed and we were unable to recover it.
00:36:43.117-00:36:43.121 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 2024-12-15 06:27:03.049 through 06:27:03.073 ...]
00:36:43.121 [2024-12-15 06:27:03.073509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.073541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.121 [2024-12-15 06:27:03.073672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.073705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.121 [2024-12-15 06:27:03.073888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.073921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.121 [2024-12-15 06:27:03.074209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.074245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.121 [2024-12-15 06:27:03.074437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.074471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 
00:36:43.121 [2024-12-15 06:27:03.074713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.074788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.121 [2024-12-15 06:27:03.074975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.075057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.121 [2024-12-15 06:27:03.075255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.075292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.121 [2024-12-15 06:27:03.075482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.075517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.121 [2024-12-15 06:27:03.075697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.075732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 
00:36:43.121 [2024-12-15 06:27:03.075848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.075882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.121 [2024-12-15 06:27:03.076058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.076096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.121 [2024-12-15 06:27:03.076345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.076379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.121 [2024-12-15 06:27:03.076528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.076561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.121 [2024-12-15 06:27:03.076744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.076778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 
00:36:43.121 [2024-12-15 06:27:03.076968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.077015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.121 [2024-12-15 06:27:03.077143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.077177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.121 [2024-12-15 06:27:03.077378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.077412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.121 [2024-12-15 06:27:03.077524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.077567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.121 [2024-12-15 06:27:03.077760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.077794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 
00:36:43.121 [2024-12-15 06:27:03.077909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.077943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.121 [2024-12-15 06:27:03.078065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.078102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.121 [2024-12-15 06:27:03.078226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.078260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.121 [2024-12-15 06:27:03.078441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.078475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.121 [2024-12-15 06:27:03.078645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.078679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 
00:36:43.121 [2024-12-15 06:27:03.078946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.078979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.121 [2024-12-15 06:27:03.079110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.079144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.121 [2024-12-15 06:27:03.079281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.079315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.121 [2024-12-15 06:27:03.079502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.079536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.121 [2024-12-15 06:27:03.079654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.079689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 
00:36:43.121 [2024-12-15 06:27:03.079862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.079896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.121 [2024-12-15 06:27:03.080157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.080195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.121 [2024-12-15 06:27:03.080385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.080419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.121 [2024-12-15 06:27:03.080614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.080648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.121 [2024-12-15 06:27:03.080820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.080853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 
00:36:43.121 [2024-12-15 06:27:03.080971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.081012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.121 [2024-12-15 06:27:03.081140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.081174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.121 [2024-12-15 06:27:03.081281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.121 [2024-12-15 06:27:03.081315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.121 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.081432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.081466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.081642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.081675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 
00:36:43.122 [2024-12-15 06:27:03.081915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.081948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.082085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.082120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.082235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.082268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.082452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.082486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.082602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.082636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 
00:36:43.122 [2024-12-15 06:27:03.082794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.082868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.083068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.083109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.083227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.083262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.083373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.083406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.083587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.083621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 
00:36:43.122 [2024-12-15 06:27:03.083819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.083854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.083983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.084026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.084265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.084298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.084491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.084527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.084721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.084754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 
00:36:43.122 [2024-12-15 06:27:03.084877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.084911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.085021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.085056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.085236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.085269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.085441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.085475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.085615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.085650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 
00:36:43.122 [2024-12-15 06:27:03.085783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.085816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.086080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.086116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.086259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.086292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.086434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.086469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.086587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.086620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 
00:36:43.122 [2024-12-15 06:27:03.086730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.086764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.086943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.086977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.087127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.087162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.087344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.087377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.087560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.087594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 
00:36:43.122 [2024-12-15 06:27:03.087771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.087805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.088009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.088045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.088298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.088337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.088471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.088505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.088639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.088672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 
00:36:43.122 [2024-12-15 06:27:03.088797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.088832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.088934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.122 [2024-12-15 06:27:03.088968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.122 qpair failed and we were unable to recover it. 00:36:43.122 [2024-12-15 06:27:03.089154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.123 [2024-12-15 06:27:03.089189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.123 qpair failed and we were unable to recover it. 00:36:43.123 [2024-12-15 06:27:03.089456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.123 [2024-12-15 06:27:03.089491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.123 qpair failed and we were unable to recover it. 00:36:43.123 [2024-12-15 06:27:03.089709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.123 [2024-12-15 06:27:03.089744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.123 qpair failed and we were unable to recover it. 
00:36:43.125 [2024-12-15 06:27:03.105905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.125 [2024-12-15 06:27:03.105977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.125 qpair failed and we were unable to recover it.
00:36:43.126 [2024-12-15 06:27:03.110142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.110176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.110352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.110385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.110500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.110533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.110643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.110677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.110866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.110901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 
00:36:43.126 [2024-12-15 06:27:03.111014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.111051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.112485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.112538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.112743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.112779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.112965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.113011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.113197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.113232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 
00:36:43.126 [2024-12-15 06:27:03.113417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.113450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.113565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.113599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.113710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.113743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.113925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.113958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.114145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.114180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 
00:36:43.126 [2024-12-15 06:27:03.114362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.114397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.114589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.114623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.114765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.114798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.114934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.114967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.115155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.115189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 
00:36:43.126 [2024-12-15 06:27:03.115324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.115358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.115482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.115515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.115621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.115654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.115841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.115875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.116025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.116059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 
00:36:43.126 [2024-12-15 06:27:03.116176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.116211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.116333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.116366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.116558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.116591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.116699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.116732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.116850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.116883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 
00:36:43.126 [2024-12-15 06:27:03.117085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.117120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.117247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.117280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.117416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.117450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.117630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.117663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.117790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.117822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 
00:36:43.126 [2024-12-15 06:27:03.117950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.117984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.118171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.118204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.118324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.118357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.118478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.118512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.118649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.118681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 
00:36:43.126 [2024-12-15 06:27:03.118808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.126 [2024-12-15 06:27:03.118842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.126 qpair failed and we were unable to recover it. 00:36:43.126 [2024-12-15 06:27:03.118963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.119004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 00:36:43.127 [2024-12-15 06:27:03.119112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.119145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 00:36:43.127 [2024-12-15 06:27:03.119270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.119303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 00:36:43.127 [2024-12-15 06:27:03.119438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.119472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 
00:36:43.127 [2024-12-15 06:27:03.119600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.119634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 00:36:43.127 [2024-12-15 06:27:03.119737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.119771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 00:36:43.127 [2024-12-15 06:27:03.119905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.119938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 00:36:43.127 [2024-12-15 06:27:03.120130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.120165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 00:36:43.127 [2024-12-15 06:27:03.120283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.120316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 
00:36:43.127 [2024-12-15 06:27:03.120432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.120466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 00:36:43.127 [2024-12-15 06:27:03.120648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.120681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 00:36:43.127 [2024-12-15 06:27:03.120801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.120835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 00:36:43.127 [2024-12-15 06:27:03.120954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.120988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 00:36:43.127 [2024-12-15 06:27:03.121116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.121149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 
00:36:43.127 [2024-12-15 06:27:03.121369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.121402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 00:36:43.127 [2024-12-15 06:27:03.121526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.121558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 00:36:43.127 [2024-12-15 06:27:03.121678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.121718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 00:36:43.127 [2024-12-15 06:27:03.121894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.121929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 00:36:43.127 [2024-12-15 06:27:03.122119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.122154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 
00:36:43.127 [2024-12-15 06:27:03.122331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.122364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 00:36:43.127 [2024-12-15 06:27:03.122483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.122515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 00:36:43.127 [2024-12-15 06:27:03.122689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.122722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 00:36:43.127 [2024-12-15 06:27:03.122898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.122931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 00:36:43.127 [2024-12-15 06:27:03.123111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.123144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 
00:36:43.127 [2024-12-15 06:27:03.123328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.123361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 00:36:43.127 [2024-12-15 06:27:03.123540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.123573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 00:36:43.127 [2024-12-15 06:27:03.123773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.123812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 00:36:43.127 [2024-12-15 06:27:03.124013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.124048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 00:36:43.127 [2024-12-15 06:27:03.124169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.124203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 
00:36:43.127 [2024-12-15 06:27:03.124340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.124373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 00:36:43.127 [2024-12-15 06:27:03.124484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.124517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 00:36:43.127 [2024-12-15 06:27:03.124627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.124660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 00:36:43.127 [2024-12-15 06:27:03.124833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.124865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 00:36:43.127 [2024-12-15 06:27:03.125036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.127 [2024-12-15 06:27:03.125071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.127 qpair failed and we were unable to recover it. 
00:36:43.127 [2024-12-15 06:27:03.125262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.127 [2024-12-15 06:27:03.125295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.127 qpair failed and we were unable to recover it.
00:36:43.130 [2024-12-15 06:27:03.147962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.130 [2024-12-15 06:27:03.148022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.130 qpair failed and we were unable to recover it. 00:36:43.130 [2024-12-15 06:27:03.148196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.130 [2024-12-15 06:27:03.148227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.130 qpair failed and we were unable to recover it. 00:36:43.130 [2024-12-15 06:27:03.148399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.130 [2024-12-15 06:27:03.148429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.130 qpair failed and we were unable to recover it. 00:36:43.130 [2024-12-15 06:27:03.148541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.130 [2024-12-15 06:27:03.148571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.130 qpair failed and we were unable to recover it. 00:36:43.130 [2024-12-15 06:27:03.148691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.130 [2024-12-15 06:27:03.148721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.130 qpair failed and we were unable to recover it. 
00:36:43.130 [2024-12-15 06:27:03.148883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.130 [2024-12-15 06:27:03.148914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.130 qpair failed and we were unable to recover it. 00:36:43.130 [2024-12-15 06:27:03.149079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.130 [2024-12-15 06:27:03.149112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.130 qpair failed and we were unable to recover it. 00:36:43.130 [2024-12-15 06:27:03.149298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.130 [2024-12-15 06:27:03.149328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.130 qpair failed and we were unable to recover it. 00:36:43.130 [2024-12-15 06:27:03.149440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.130 [2024-12-15 06:27:03.149472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.130 qpair failed and we were unable to recover it. 00:36:43.130 [2024-12-15 06:27:03.149570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.130 [2024-12-15 06:27:03.149600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.130 qpair failed and we were unable to recover it. 
00:36:43.130 [2024-12-15 06:27:03.149780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.130 [2024-12-15 06:27:03.149810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.130 qpair failed and we were unable to recover it. 00:36:43.130 [2024-12-15 06:27:03.149921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.130 [2024-12-15 06:27:03.149951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.130 qpair failed and we were unable to recover it. 00:36:43.130 [2024-12-15 06:27:03.150273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.130 [2024-12-15 06:27:03.150308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.130 qpair failed and we were unable to recover it. 00:36:43.130 [2024-12-15 06:27:03.150549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.130 [2024-12-15 06:27:03.150582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.130 qpair failed and we were unable to recover it. 00:36:43.130 [2024-12-15 06:27:03.150786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.130 [2024-12-15 06:27:03.150819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.130 qpair failed and we were unable to recover it. 
00:36:43.130 [2024-12-15 06:27:03.151006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.151040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.151166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.151198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.151391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.151423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.151555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.151587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.151692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.151726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 
00:36:43.131 [2024-12-15 06:27:03.151904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.151937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.152095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.152131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.152253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.152285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.152404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.152435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.152534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.152565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 
00:36:43.131 [2024-12-15 06:27:03.152672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.152702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.152936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.152967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.153214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.153250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.153427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.153457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.153565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.153596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 
00:36:43.131 [2024-12-15 06:27:03.153693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.153724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.153841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.153871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.154043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.154078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.154200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.154232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.154336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.154369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 
00:36:43.131 [2024-12-15 06:27:03.154485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.154519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.154700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.154733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.154843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.154877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.155005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.155037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.155219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.155249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 
00:36:43.131 [2024-12-15 06:27:03.155413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.155444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.155623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.155652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.155774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.155805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.155906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.155937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.156131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.156162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 
00:36:43.131 [2024-12-15 06:27:03.156339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.156370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.156604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.156634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.156837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.156867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.157120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.157152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.157262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.157293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 
00:36:43.131 [2024-12-15 06:27:03.157406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.157436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.157554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.157584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.157698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.157730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.157901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.157930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.158068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.158099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 
00:36:43.131 [2024-12-15 06:27:03.158203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.158233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.158339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.131 [2024-12-15 06:27:03.158370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.131 qpair failed and we were unable to recover it. 00:36:43.131 [2024-12-15 06:27:03.158595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.132 [2024-12-15 06:27:03.158629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.132 qpair failed and we were unable to recover it. 00:36:43.132 [2024-12-15 06:27:03.158807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.132 [2024-12-15 06:27:03.158840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.132 qpair failed and we were unable to recover it. 00:36:43.132 [2024-12-15 06:27:03.158958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.132 [2024-12-15 06:27:03.159001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.132 qpair failed and we were unable to recover it. 
00:36:43.132 [2024-12-15 06:27:03.159192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.132 [2024-12-15 06:27:03.159223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.132 qpair failed and we were unable to recover it. 00:36:43.132 [2024-12-15 06:27:03.159346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.132 [2024-12-15 06:27:03.159378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.132 qpair failed and we were unable to recover it. 00:36:43.132 [2024-12-15 06:27:03.159561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.132 [2024-12-15 06:27:03.159595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.132 qpair failed and we were unable to recover it. 00:36:43.132 [2024-12-15 06:27:03.159775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.132 [2024-12-15 06:27:03.159809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.132 qpair failed and we were unable to recover it. 00:36:43.132 [2024-12-15 06:27:03.159925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.132 [2024-12-15 06:27:03.159957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.132 qpair failed and we were unable to recover it. 
00:36:43.132 [2024-12-15 06:27:03.160101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.132 [2024-12-15 06:27:03.160136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.132 qpair failed and we were unable to recover it. 00:36:43.132 [2024-12-15 06:27:03.160261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.132 [2024-12-15 06:27:03.160294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.132 qpair failed and we were unable to recover it. 00:36:43.132 [2024-12-15 06:27:03.160419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.132 [2024-12-15 06:27:03.160458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.132 qpair failed and we were unable to recover it. 00:36:43.132 [2024-12-15 06:27:03.160641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.132 [2024-12-15 06:27:03.160675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.132 qpair failed and we were unable to recover it. 00:36:43.132 [2024-12-15 06:27:03.160850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.132 [2024-12-15 06:27:03.160883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.132 qpair failed and we were unable to recover it. 
00:36:43.132 [2024-12-15 06:27:03.161017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.132 [2024-12-15 06:27:03.161050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.132 qpair failed and we were unable to recover it. 00:36:43.132 [2024-12-15 06:27:03.161177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.132 [2024-12-15 06:27:03.161209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.132 qpair failed and we were unable to recover it. 00:36:43.132 [2024-12-15 06:27:03.161393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.132 [2024-12-15 06:27:03.161427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.132 qpair failed and we were unable to recover it. 00:36:43.132 [2024-12-15 06:27:03.161542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.132 [2024-12-15 06:27:03.161575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.132 qpair failed and we were unable to recover it. 00:36:43.132 [2024-12-15 06:27:03.161749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.132 [2024-12-15 06:27:03.161782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.132 qpair failed and we were unable to recover it. 
00:36:43.132 [2024-12-15 06:27:03.161904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.132 [2024-12-15 06:27:03.161937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.132 qpair failed and we were unable to recover it. 00:36:43.132 [2024-12-15 06:27:03.162145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.132 [2024-12-15 06:27:03.162181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.132 qpair failed and we were unable to recover it. 00:36:43.132 [2024-12-15 06:27:03.162375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.132 [2024-12-15 06:27:03.162408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.132 qpair failed and we were unable to recover it. 00:36:43.132 [2024-12-15 06:27:03.162519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.132 [2024-12-15 06:27:03.162552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.132 qpair failed and we were unable to recover it. 00:36:43.132 [2024-12-15 06:27:03.162669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.132 [2024-12-15 06:27:03.162702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.132 qpair failed and we were unable to recover it. 
00:36:43.132 [2024-12-15 06:27:03.162889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.132 [2024-12-15 06:27:03.162922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.132 qpair failed and we were unable to recover it. 00:36:43.132 [2024-12-15 06:27:03.163058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.132 [2024-12-15 06:27:03.163092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.132 qpair failed and we were unable to recover it. 00:36:43.132 [2024-12-15 06:27:03.163213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.132 [2024-12-15 06:27:03.163247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.132 qpair failed and we were unable to recover it. 00:36:43.132 [2024-12-15 06:27:03.163421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.132 [2024-12-15 06:27:03.163453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.132 qpair failed and we were unable to recover it. 00:36:43.132 [2024-12-15 06:27:03.163563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.132 [2024-12-15 06:27:03.163597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.132 qpair failed and we were unable to recover it. 
00:36:43.132 [2024-12-15 06:27:03.163779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.132 [2024-12-15 06:27:03.163813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.132 qpair failed and we were unable to recover it.
00:36:43.132 [2024-12-15 06:27:03.163940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.132 [2024-12-15 06:27:03.163973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.132 qpair failed and we were unable to recover it.
00:36:43.132 [2024-12-15 06:27:03.164170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.132 [2024-12-15 06:27:03.164203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.132 qpair failed and we were unable to recover it.
00:36:43.132 [2024-12-15 06:27:03.164378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.132 [2024-12-15 06:27:03.164411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.132 qpair failed and we were unable to recover it.
00:36:43.132 [2024-12-15 06:27:03.164531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.132 [2024-12-15 06:27:03.164563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.132 qpair failed and we were unable to recover it.
00:36:43.132 [2024-12-15 06:27:03.164668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.132 [2024-12-15 06:27:03.164701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.132 qpair failed and we were unable to recover it.
00:36:43.132 [2024-12-15 06:27:03.164816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.132 [2024-12-15 06:27:03.164849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.132 qpair failed and we were unable to recover it.
00:36:43.132 [2024-12-15 06:27:03.164972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.132 [2024-12-15 06:27:03.165029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.132 qpair failed and we were unable to recover it.
00:36:43.132 [2024-12-15 06:27:03.165139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.132 [2024-12-15 06:27:03.165173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.132 qpair failed and we were unable to recover it.
00:36:43.132 [2024-12-15 06:27:03.165354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.132 [2024-12-15 06:27:03.165388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.132 qpair failed and we were unable to recover it.
00:36:43.132 [2024-12-15 06:27:03.165490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.132 [2024-12-15 06:27:03.165523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.132 qpair failed and we were unable to recover it.
00:36:43.132 [2024-12-15 06:27:03.165704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.132 [2024-12-15 06:27:03.165737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.132 qpair failed and we were unable to recover it.
00:36:43.132 [2024-12-15 06:27:03.165861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.132 [2024-12-15 06:27:03.165894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.132 qpair failed and we were unable to recover it.
00:36:43.132 [2024-12-15 06:27:03.166040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.132 [2024-12-15 06:27:03.166076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.132 qpair failed and we were unable to recover it.
00:36:43.132 [2024-12-15 06:27:03.166251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.132 [2024-12-15 06:27:03.166284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.132 qpair failed and we were unable to recover it.
00:36:43.132 [2024-12-15 06:27:03.166416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.132 [2024-12-15 06:27:03.166449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.132 qpair failed and we were unable to recover it.
00:36:43.132 [2024-12-15 06:27:03.166571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.132 [2024-12-15 06:27:03.166604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.132 qpair failed and we were unable to recover it.
00:36:43.132 [2024-12-15 06:27:03.166777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.166810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.167067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.167101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.167277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.167310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.167417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.167450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.167577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.167610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.167780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.167819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.168012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.168047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.168221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.168255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.168360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.168393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.168518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.168550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.168684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.168718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.168826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.168859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.169029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.169064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.169185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.169219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.169392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.169425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.169544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.169578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.169691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.169724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.169836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.169869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.169990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.170035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.170160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.170194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.170361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.170393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.170567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.170600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.170783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.170817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.170937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.170969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.171150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.171185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.171294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.171328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.171445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.171478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.171597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.171631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.171744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.171781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.171908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.171944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.172133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.172168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.172274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.172307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.172488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.172562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.172732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.172800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.172939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.172977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.173102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.173136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.173314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.173348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.173533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.173565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.173755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.173789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.133 qpair failed and we were unable to recover it.
00:36:43.133 [2024-12-15 06:27:03.173912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.133 [2024-12-15 06:27:03.173946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.174091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.174127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.174246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.174280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.174391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.174425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.174602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.174636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.174757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.174792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.175010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.175053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.175230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.175265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.175381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.175415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.175523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.175556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.175663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.175696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.175829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.175862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.177276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.177331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.177609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.177645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.177762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.177796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.177907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.177941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.178181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.178216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.178357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.178390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.178520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.178554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.178666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.178699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.178895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.178929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.179045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.179080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.179268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.179302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.179485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.179518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.179638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.179672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.179848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.179882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.180066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.180101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.181523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.181575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.181716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.181750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.181964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.182010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.182185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.182217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.182416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.182449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.182639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.182671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.182852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.182886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.183063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.183098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.183219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.183252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.183368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.183400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.183511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.183545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.183722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.134 [2024-12-15 06:27:03.183754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.134 qpair failed and we were unable to recover it.
00:36:43.134 [2024-12-15 06:27:03.183934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.134 [2024-12-15 06:27:03.183968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.134 qpair failed and we were unable to recover it. 00:36:43.134 [2024-12-15 06:27:03.184184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.134 [2024-12-15 06:27:03.184218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.134 qpair failed and we were unable to recover it. 00:36:43.134 [2024-12-15 06:27:03.184340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.134 [2024-12-15 06:27:03.184374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.134 qpair failed and we were unable to recover it. 00:36:43.134 [2024-12-15 06:27:03.184484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.134 [2024-12-15 06:27:03.184516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.134 qpair failed and we were unable to recover it. 00:36:43.134 [2024-12-15 06:27:03.184692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.134 [2024-12-15 06:27:03.184726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.134 qpair failed and we were unable to recover it. 
00:36:43.134 [2024-12-15 06:27:03.184850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.134 [2024-12-15 06:27:03.184883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.134 qpair failed and we were unable to recover it. 00:36:43.134 [2024-12-15 06:27:03.185015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.134 [2024-12-15 06:27:03.185051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.134 qpair failed and we were unable to recover it. 00:36:43.134 [2024-12-15 06:27:03.185235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.134 [2024-12-15 06:27:03.185275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.134 qpair failed and we were unable to recover it. 00:36:43.134 [2024-12-15 06:27:03.185398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.134 [2024-12-15 06:27:03.185432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.134 qpair failed and we were unable to recover it. 00:36:43.134 [2024-12-15 06:27:03.185542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.185576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 
00:36:43.135 [2024-12-15 06:27:03.185713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.185746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.185967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.186035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.186172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.186204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.186329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.186362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.186606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.186639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 
00:36:43.135 [2024-12-15 06:27:03.186816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.186849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.187112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.187147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.187380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.187413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.187610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.187643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.187797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.187829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 
00:36:43.135 [2024-12-15 06:27:03.187960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.188003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.188150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.188183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.188434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.188467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.188649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.188682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.188788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.188820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 
00:36:43.135 [2024-12-15 06:27:03.189017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.189051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.189167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.189200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.189363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.189395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.190821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.190874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.191126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.191163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 
00:36:43.135 [2024-12-15 06:27:03.191294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.191328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.191526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.191558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.191758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.191790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.191901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.191935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.192093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.192129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 
00:36:43.135 [2024-12-15 06:27:03.192304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.192336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.192461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.192495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.192675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.192709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.192882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.192915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.193102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.193137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 
00:36:43.135 [2024-12-15 06:27:03.193244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.193277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.193419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.193453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.193576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.193609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.193813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.193847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.194037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.194072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 
00:36:43.135 [2024-12-15 06:27:03.194315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.194348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.194456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.194490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.194606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.194645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.194790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.194824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.195030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.195065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 
00:36:43.135 [2024-12-15 06:27:03.195329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.195362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.195604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.195638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.195761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.195793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.195967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.196007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.196128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.196161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 
00:36:43.135 [2024-12-15 06:27:03.196274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.196307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.196413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.196445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.135 [2024-12-15 06:27:03.196642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.135 [2024-12-15 06:27:03.196677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.135 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.196916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.196949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.197124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.197159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 
00:36:43.136 [2024-12-15 06:27:03.197332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.197365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.197494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.197528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.197660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.197693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.197890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.197925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.198037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.198072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 
00:36:43.136 [2024-12-15 06:27:03.198258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.198291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.198476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.198508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.198623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.198657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.198762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.198794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.198978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.199022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 
00:36:43.136 [2024-12-15 06:27:03.199143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.199175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.199304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.199338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.199508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.199541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.199730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.199765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.200036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.200071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 
00:36:43.136 [2024-12-15 06:27:03.200187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.200216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.200388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.200419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.200594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.200624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.200804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.200839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.200962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.201003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 
00:36:43.136 [2024-12-15 06:27:03.201142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.201175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.201297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.201330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.201504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.201536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.201664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.201698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.201881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.201914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 
00:36:43.136 [2024-12-15 06:27:03.202055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.202090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.202205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.202250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.202380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.202416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.202533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.202563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.202751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.202797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 
00:36:43.136 [2024-12-15 06:27:03.202917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.202950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.203073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.203108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.203240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.203272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.203401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.203431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.203555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.203585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 
00:36:43.136 [2024-12-15 06:27:03.203698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.203729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.203830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.203858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.204030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.204063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.204236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.204268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.204514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.204547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 
00:36:43.136 [2024-12-15 06:27:03.204790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.204822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.204965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.205006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.205131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.205165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.205348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.205382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.136 qpair failed and we were unable to recover it. 00:36:43.136 [2024-12-15 06:27:03.205556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.136 [2024-12-15 06:27:03.205588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 
00:36:43.137 [2024-12-15 06:27:03.205706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.205739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.205859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.205892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.206083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.206121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.206303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.206336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.206468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.206511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 
00:36:43.137 [2024-12-15 06:27:03.206724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.206769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.206938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.206980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.207217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.207262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.207471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.207509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.207704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.207743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 
00:36:43.137 [2024-12-15 06:27:03.207965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.208024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.208219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.208251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.208518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.208548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.208658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.208688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.208886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.208916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 
00:36:43.137 [2024-12-15 06:27:03.209036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.209069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.209278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.209309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.209480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.209510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.209628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.209658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.209774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.209805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 
00:36:43.137 [2024-12-15 06:27:03.209912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.209942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.210076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.210109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.210297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.210327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.210433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.210464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.210588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.210618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 
00:36:43.137 [2024-12-15 06:27:03.210812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.210844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.211040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.211072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.211195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.211226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.211410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.211440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.211617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.211647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 
00:36:43.137 [2024-12-15 06:27:03.211828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.211859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.212091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.212124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.212295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.212325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.212441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.212473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.212579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.212610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 
00:36:43.137 [2024-12-15 06:27:03.212715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.212746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.212856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.212888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.213016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.213049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.213155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.213186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.213372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.213403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 
00:36:43.137 [2024-12-15 06:27:03.213577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.213608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.213712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.213743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.213931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.213962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.214076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.214108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 00:36:43.137 [2024-12-15 06:27:03.214280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.137 [2024-12-15 06:27:03.214311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.137 qpair failed and we were unable to recover it. 
00:36:43.421 [2024-12-15 06:27:03.214568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.421 [2024-12-15 06:27:03.214601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.421 qpair failed and we were unable to recover it. 00:36:43.421 [2024-12-15 06:27:03.214713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.421 [2024-12-15 06:27:03.214745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.421 qpair failed and we were unable to recover it. 00:36:43.421 [2024-12-15 06:27:03.214916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.421 [2024-12-15 06:27:03.214947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.421 qpair failed and we were unable to recover it. 00:36:43.421 [2024-12-15 06:27:03.215163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.421 [2024-12-15 06:27:03.215196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.421 qpair failed and we were unable to recover it. 00:36:43.421 [2024-12-15 06:27:03.215371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.421 [2024-12-15 06:27:03.215408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.421 qpair failed and we were unable to recover it. 
00:36:43.421 [2024-12-15 06:27:03.215528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.421 [2024-12-15 06:27:03.215559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.421 qpair failed and we were unable to recover it. 00:36:43.421 [2024-12-15 06:27:03.215658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.421 [2024-12-15 06:27:03.215689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.421 qpair failed and we were unable to recover it. 00:36:43.421 [2024-12-15 06:27:03.215863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.421 [2024-12-15 06:27:03.215894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.421 qpair failed and we were unable to recover it. 00:36:43.421 [2024-12-15 06:27:03.216022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.421 [2024-12-15 06:27:03.216056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1207928 Killed "${NVMF_APP[@]}" "$@" 00:36:43.421 qpair failed and we were unable to recover it. 00:36:43.421 [2024-12-15 06:27:03.216243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.421 [2024-12-15 06:27:03.216274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.421 qpair failed and we were unable to recover it. 
00:36:43.421 [2024-12-15 06:27:03.216443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.421 [2024-12-15 06:27:03.216474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.421 qpair failed and we were unable to recover it. 00:36:43.421 [2024-12-15 06:27:03.216659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.421 [2024-12-15 06:27:03.216690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.421 qpair failed and we were unable to recover it. 00:36:43.421 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:36:43.421 [2024-12-15 06:27:03.216869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.421 [2024-12-15 06:27:03.216900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.421 qpair failed and we were unable to recover it. 00:36:43.421 [2024-12-15 06:27:03.217017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.421 [2024-12-15 06:27:03.217049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.421 qpair failed and we were unable to recover it. 
00:36:43.421 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:43.421 [2024-12-15 06:27:03.217178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.421 [2024-12-15 06:27:03.217208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.421 qpair failed and we were unable to recover it. 00:36:43.421 [2024-12-15 06:27:03.217311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.421 [2024-12-15 06:27:03.217342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.421 qpair failed and we were unable to recover it. 00:36:43.421 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:43.421 [2024-12-15 06:27:03.217499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.421 [2024-12-15 06:27:03.217572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.421 qpair failed and we were unable to recover it. 00:36:43.421 [2024-12-15 06:27:03.217718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.421 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:43.421 [2024-12-15 06:27:03.217755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.421 qpair failed and we were unable to recover it. 
00:36:43.421 [2024-12-15 06:27:03.217968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.421 [2024-12-15 06:27:03.218015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.421 qpair failed and we were unable to recover it. 00:36:43.421 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:43.421 [2024-12-15 06:27:03.218134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.421 [2024-12-15 06:27:03.218168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.421 qpair failed and we were unable to recover it. 00:36:43.421 [2024-12-15 06:27:03.218289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.421 [2024-12-15 06:27:03.218323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.421 qpair failed and we were unable to recover it. 00:36:43.421 [2024-12-15 06:27:03.218517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.421 [2024-12-15 06:27:03.218551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.421 qpair failed and we were unable to recover it. 00:36:43.421 [2024-12-15 06:27:03.218677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.421 [2024-12-15 06:27:03.218711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.421 qpair failed and we were unable to recover it. 
00:36:43.421 [2024-12-15 06:27:03.218890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.421 [2024-12-15 06:27:03.218924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.421 qpair failed and we were unable to recover it. 00:36:43.421 [2024-12-15 06:27:03.219123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.421 [2024-12-15 06:27:03.219160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.421 qpair failed and we were unable to recover it. 00:36:43.421 [2024-12-15 06:27:03.219269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.421 [2024-12-15 06:27:03.219303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.421 qpair failed and we were unable to recover it. 00:36:43.421 [2024-12-15 06:27:03.219408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.421 [2024-12-15 06:27:03.219443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.421 qpair failed and we were unable to recover it. 00:36:43.421 [2024-12-15 06:27:03.219684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.421 [2024-12-15 06:27:03.219718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.421 qpair failed and we were unable to recover it. 
00:36:43.421 [2024-12-15 06:27:03.219896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.421 [2024-12-15 06:27:03.219938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.422 qpair failed and we were unable to recover it. 00:36:43.422 [2024-12-15 06:27:03.220136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.422 [2024-12-15 06:27:03.220172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.422 qpair failed and we were unable to recover it. 00:36:43.422 [2024-12-15 06:27:03.220413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.422 [2024-12-15 06:27:03.220447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.422 qpair failed and we were unable to recover it. 00:36:43.422 [2024-12-15 06:27:03.220694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.422 [2024-12-15 06:27:03.220728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.422 qpair failed and we were unable to recover it. 00:36:43.422 [2024-12-15 06:27:03.220858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.422 [2024-12-15 06:27:03.220892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.422 qpair failed and we were unable to recover it. 
00:36:43.422 [2024-12-15 06:27:03.221070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.422 [2024-12-15 06:27:03.221106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.422 qpair failed and we were unable to recover it. 00:36:43.422 [2024-12-15 06:27:03.221301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.422 [2024-12-15 06:27:03.221334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.422 qpair failed and we were unable to recover it. 00:36:43.422 [2024-12-15 06:27:03.221466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.422 [2024-12-15 06:27:03.221500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.422 qpair failed and we were unable to recover it. 00:36:43.422 [2024-12-15 06:27:03.221617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.422 [2024-12-15 06:27:03.221651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.422 qpair failed and we were unable to recover it. 00:36:43.422 [2024-12-15 06:27:03.221780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.422 [2024-12-15 06:27:03.221813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.422 qpair failed and we were unable to recover it. 
00:36:43.422 [2024-12-15 06:27:03.222012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.422 [2024-12-15 06:27:03.222046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.422 qpair failed and we were unable to recover it.
00:36:43.422 [2024-12-15 06:27:03.222153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.422 [2024-12-15 06:27:03.222185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.422 qpair failed and we were unable to recover it.
00:36:43.422 [2024-12-15 06:27:03.222298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.422 [2024-12-15 06:27:03.222333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.422 qpair failed and we were unable to recover it.
00:36:43.422 [2024-12-15 06:27:03.222524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.422 [2024-12-15 06:27:03.222559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.422 qpair failed and we were unable to recover it.
00:36:43.422 [2024-12-15 06:27:03.222768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.422 [2024-12-15 06:27:03.222802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.422 qpair failed and we were unable to recover it.
00:36:43.422 [2024-12-15 06:27:03.222926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.422 [2024-12-15 06:27:03.222962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.422 qpair failed and we were unable to recover it.
00:36:43.422 [2024-12-15 06:27:03.223193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.422 [2024-12-15 06:27:03.223264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:43.422 qpair failed and we were unable to recover it.
00:36:43.422 [2024-12-15 06:27:03.223525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.422 [2024-12-15 06:27:03.223601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.422 qpair failed and we were unable to recover it.
00:36:43.422 [2024-12-15 06:27:03.223794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.422 [2024-12-15 06:27:03.223833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.422 qpair failed and we were unable to recover it.
00:36:43.422 [2024-12-15 06:27:03.224018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.422 [2024-12-15 06:27:03.224054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.422 qpair failed and we were unable to recover it.
00:36:43.422 [2024-12-15 06:27:03.224176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.422 [2024-12-15 06:27:03.224210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.422 qpair failed and we were unable to recover it.
00:36:43.422 [2024-12-15 06:27:03.224405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.422 [2024-12-15 06:27:03.224438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.422 qpair failed and we were unable to recover it.
00:36:43.422 [2024-12-15 06:27:03.224704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.422 [2024-12-15 06:27:03.224738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.422 qpair failed and we were unable to recover it.
00:36:43.422 [2024-12-15 06:27:03.224858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.422 [2024-12-15 06:27:03.224894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.422 qpair failed and we were unable to recover it.
00:36:43.422 [2024-12-15 06:27:03.225043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.422 [2024-12-15 06:27:03.225081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.422 qpair failed and we were unable to recover it.
00:36:43.422 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1208786
00:36:43.422 [2024-12-15 06:27:03.225207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.422 [2024-12-15 06:27:03.225242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.422 qpair failed and we were unable to recover it.
00:36:43.422 [2024-12-15 06:27:03.225426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.422 [2024-12-15 06:27:03.225464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.422 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1208786
00:36:43.422 qpair failed and we were unable to recover it.
00:36:43.422 [2024-12-15 06:27:03.225573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.422 [2024-12-15 06:27:03.225607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.422 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:36:43.422 qpair failed and we were unable to recover it.
00:36:43.422 [2024-12-15 06:27:03.225803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.422 [2024-12-15 06:27:03.225842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.422 qpair failed and we were unable to recover it.
00:36:43.422 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1208786 ']'
00:36:43.422 [2024-12-15 06:27:03.225981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.422 [2024-12-15 06:27:03.226027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.422 qpair failed and we were unable to recover it.
00:36:43.422 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:43.422 [2024-12-15 06:27:03.226268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.422 [2024-12-15 06:27:03.226303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.422 qpair failed and we were unable to recover it.
00:36:43.422 [2024-12-15 06:27:03.226417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.422 [2024-12-15 06:27:03.226451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.422 qpair failed and we were unable to recover it.
00:36:43.422 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:43.422 [2024-12-15 06:27:03.226588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.422 [2024-12-15 06:27:03.226624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.422 qpair failed and we were unable to recover it.
00:36:43.422 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:43.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... [2024-12-15 06:27:03.226823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.422 [2024-12-15 06:27:03.226856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.422 qpair failed and we were unable to recover it.
00:36:43.422 [2024-12-15 06:27:03.227047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.422 [2024-12-15 06:27:03.227089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.422 qpair failed and we were unable to recover it.
00:36:43.422 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:43.422 [2024-12-15 06:27:03.227209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.227244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:43.423 [2024-12-15 06:27:03.227423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.227464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.227697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.227732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.227908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.227941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.228080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.228116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.228298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.228332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.228530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.228563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.228757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.228791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.228927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.228964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.229180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.229215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.229387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.229423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.229639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.229673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.229816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.229850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.229965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.230010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.230237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.230273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.230400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.230434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.230562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.230597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.230710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.230746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.230918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.230951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.231151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.231186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.231307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.231342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.231472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.231506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.231780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.231815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.231939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.231972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.232172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.232207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.232342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.232377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.232501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.232535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.232725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.232760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.232885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.232924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.233048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.233085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.233276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.233310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.233496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.233530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.233647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.233681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.233814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.233849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.234036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.234074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.234201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.234236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.234414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.234448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.234571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.423 [2024-12-15 06:27:03.234605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.423 qpair failed and we were unable to recover it.
00:36:43.423 [2024-12-15 06:27:03.234719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.424 [2024-12-15 06:27:03.234752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.424 qpair failed and we were unable to recover it.
00:36:43.424 [2024-12-15 06:27:03.234896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.424 [2024-12-15 06:27:03.234930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.424 qpair failed and we were unable to recover it.
00:36:43.424 [2024-12-15 06:27:03.235069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.424 [2024-12-15 06:27:03.235104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.424 qpair failed and we were unable to recover it.
00:36:43.424 [2024-12-15 06:27:03.235302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.424 [2024-12-15 06:27:03.235336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.424 qpair failed and we were unable to recover it.
00:36:43.424 [2024-12-15 06:27:03.235492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.424 [2024-12-15 06:27:03.235527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.424 qpair failed and we were unable to recover it.
00:36:43.424 [2024-12-15 06:27:03.235643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.424 [2024-12-15 06:27:03.235677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.424 qpair failed and we were unable to recover it.
00:36:43.424 [2024-12-15 06:27:03.235785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.424 [2024-12-15 06:27:03.235819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.424 qpair failed and we were unable to recover it.
00:36:43.424 [2024-12-15 06:27:03.235959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.424 [2024-12-15 06:27:03.236003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.424 qpair failed and we were unable to recover it.
00:36:43.424 [2024-12-15 06:27:03.236193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.424 [2024-12-15 06:27:03.236228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.424 qpair failed and we were unable to recover it.
00:36:43.424 [2024-12-15 06:27:03.236348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.424 [2024-12-15 06:27:03.236383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.424 qpair failed and we were unable to recover it.
00:36:43.424 [2024-12-15 06:27:03.236581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.424 [2024-12-15 06:27:03.236618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.424 qpair failed and we were unable to recover it.
00:36:43.424 [2024-12-15 06:27:03.236734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.424 [2024-12-15 06:27:03.236768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.424 qpair failed and we were unable to recover it.
00:36:43.424 [2024-12-15 06:27:03.237016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.424 [2024-12-15 06:27:03.237052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.424 qpair failed and we were unable to recover it.
00:36:43.424 [2024-12-15 06:27:03.237235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.424 [2024-12-15 06:27:03.237269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.424 qpair failed and we were unable to recover it.
00:36:43.424 [2024-12-15 06:27:03.237406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.424 [2024-12-15 06:27:03.237442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.424 qpair failed and we were unable to recover it.
00:36:43.424 [2024-12-15 06:27:03.237625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.424 [2024-12-15 06:27:03.237660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.424 qpair failed and we were unable to recover it.
00:36:43.424 [2024-12-15 06:27:03.237778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.424 [2024-12-15 06:27:03.237812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.424 qpair failed and we were unable to recover it.
00:36:43.424 [2024-12-15 06:27:03.238002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.424 [2024-12-15 06:27:03.238043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.424 qpair failed and we were unable to recover it.
00:36:43.424 [2024-12-15 06:27:03.238162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.424 [2024-12-15 06:27:03.238197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.424 qpair failed and we were unable to recover it.
00:36:43.424 [2024-12-15 06:27:03.238312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.424 [2024-12-15 06:27:03.238349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.424 qpair failed and we were unable to recover it.
00:36:43.424 [2024-12-15 06:27:03.238455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.424 [2024-12-15 06:27:03.238489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.424 qpair failed and we were unable to recover it.
00:36:43.424 [2024-12-15 06:27:03.238691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.424 [2024-12-15 06:27:03.238725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.424 qpair failed and we were unable to recover it.
00:36:43.424 [2024-12-15 06:27:03.238897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.424 [2024-12-15 06:27:03.238933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.424 qpair failed and we were unable to recover it.
00:36:43.424 [2024-12-15 06:27:03.239122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.424 [2024-12-15 06:27:03.239158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.424 qpair failed and we were unable to recover it.
00:36:43.424 [2024-12-15 06:27:03.239334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.424 [2024-12-15 06:27:03.239369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.424 qpair failed and we were unable to recover it.
00:36:43.424 [2024-12-15 06:27:03.239558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.424 [2024-12-15 06:27:03.239592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.424 qpair failed and we were unable to recover it.
00:36:43.424 [2024-12-15 06:27:03.239718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.424 [2024-12-15 06:27:03.239753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.424 qpair failed and we were unable to recover it.
00:36:43.424 [2024-12-15 06:27:03.239960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.424 [2024-12-15 06:27:03.240002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.424 qpair failed and we were unable to recover it.
00:36:43.424 [2024-12-15 06:27:03.240116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.424 [2024-12-15 06:27:03.240152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.424 qpair failed and we were unable to recover it.
00:36:43.424 [2024-12-15 06:27:03.240352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.424 [2024-12-15 06:27:03.240388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.424 qpair failed and we were unable to recover it. 00:36:43.424 [2024-12-15 06:27:03.240509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.424 [2024-12-15 06:27:03.240542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.424 qpair failed and we were unable to recover it. 00:36:43.424 [2024-12-15 06:27:03.240736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.424 [2024-12-15 06:27:03.240771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.424 qpair failed and we were unable to recover it. 00:36:43.424 [2024-12-15 06:27:03.240884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.424 [2024-12-15 06:27:03.240918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.424 qpair failed and we were unable to recover it. 00:36:43.424 [2024-12-15 06:27:03.241107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.424 [2024-12-15 06:27:03.241144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.424 qpair failed and we were unable to recover it. 
00:36:43.424 [2024-12-15 06:27:03.241283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.424 [2024-12-15 06:27:03.241318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.424 qpair failed and we were unable to recover it. 00:36:43.424 [2024-12-15 06:27:03.241453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.424 [2024-12-15 06:27:03.241487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.424 qpair failed and we were unable to recover it. 00:36:43.424 [2024-12-15 06:27:03.241598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.424 [2024-12-15 06:27:03.241632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.424 qpair failed and we were unable to recover it. 00:36:43.424 [2024-12-15 06:27:03.241747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.424 [2024-12-15 06:27:03.241782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.424 qpair failed and we were unable to recover it. 00:36:43.424 [2024-12-15 06:27:03.241902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.424 [2024-12-15 06:27:03.241938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.424 qpair failed and we were unable to recover it. 
00:36:43.424 [2024-12-15 06:27:03.242066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.242102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 00:36:43.425 [2024-12-15 06:27:03.242282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.242316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 00:36:43.425 [2024-12-15 06:27:03.242450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.242485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 00:36:43.425 [2024-12-15 06:27:03.242666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.242702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 00:36:43.425 [2024-12-15 06:27:03.242816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.242850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 
00:36:43.425 [2024-12-15 06:27:03.242962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.243011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 00:36:43.425 [2024-12-15 06:27:03.243133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.243167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 00:36:43.425 [2024-12-15 06:27:03.243279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.243313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 00:36:43.425 [2024-12-15 06:27:03.243434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.243467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 00:36:43.425 [2024-12-15 06:27:03.243586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.243622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 
00:36:43.425 [2024-12-15 06:27:03.243726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.243759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 00:36:43.425 [2024-12-15 06:27:03.243886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.243920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 00:36:43.425 [2024-12-15 06:27:03.244057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.244094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 00:36:43.425 [2024-12-15 06:27:03.244219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.244252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 00:36:43.425 [2024-12-15 06:27:03.244359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.244394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 
00:36:43.425 [2024-12-15 06:27:03.244563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.244598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 00:36:43.425 [2024-12-15 06:27:03.244717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.244752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 00:36:43.425 [2024-12-15 06:27:03.244924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.244959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 00:36:43.425 [2024-12-15 06:27:03.245138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.245177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 00:36:43.425 [2024-12-15 06:27:03.245365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.245399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 
00:36:43.425 [2024-12-15 06:27:03.245503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.245538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 00:36:43.425 [2024-12-15 06:27:03.245655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.245690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 00:36:43.425 [2024-12-15 06:27:03.245794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.245829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 00:36:43.425 [2024-12-15 06:27:03.245943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.245978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 00:36:43.425 [2024-12-15 06:27:03.246166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.246200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 
00:36:43.425 [2024-12-15 06:27:03.246324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.246359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 00:36:43.425 [2024-12-15 06:27:03.246493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.246527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 00:36:43.425 [2024-12-15 06:27:03.246671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.246705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 00:36:43.425 [2024-12-15 06:27:03.246833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.246867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 00:36:43.425 [2024-12-15 06:27:03.247044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.247079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 
00:36:43.425 [2024-12-15 06:27:03.247269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.247303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 00:36:43.425 [2024-12-15 06:27:03.247490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.247524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 00:36:43.425 [2024-12-15 06:27:03.247649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.247685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 00:36:43.425 [2024-12-15 06:27:03.247799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.425 [2024-12-15 06:27:03.247835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.425 qpair failed and we were unable to recover it. 00:36:43.425 [2024-12-15 06:27:03.248021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.248057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 
00:36:43.426 [2024-12-15 06:27:03.248299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.248333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 00:36:43.426 [2024-12-15 06:27:03.248452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.248486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 00:36:43.426 [2024-12-15 06:27:03.248679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.248712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 00:36:43.426 [2024-12-15 06:27:03.248818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.248851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 00:36:43.426 [2024-12-15 06:27:03.249054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.249101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 
00:36:43.426 [2024-12-15 06:27:03.249218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.249251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 00:36:43.426 [2024-12-15 06:27:03.249380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.249413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 00:36:43.426 [2024-12-15 06:27:03.249542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.249575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 00:36:43.426 [2024-12-15 06:27:03.249715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.249748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 00:36:43.426 [2024-12-15 06:27:03.249881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.249914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 
00:36:43.426 [2024-12-15 06:27:03.250034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.250068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 00:36:43.426 [2024-12-15 06:27:03.250306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.250381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 00:36:43.426 [2024-12-15 06:27:03.250628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.250670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 00:36:43.426 [2024-12-15 06:27:03.250816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.250853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 00:36:43.426 [2024-12-15 06:27:03.250989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.251042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 
00:36:43.426 [2024-12-15 06:27:03.251164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.251200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 00:36:43.426 [2024-12-15 06:27:03.251339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.251373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 00:36:43.426 [2024-12-15 06:27:03.251503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.251536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 00:36:43.426 [2024-12-15 06:27:03.251650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.251684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 00:36:43.426 [2024-12-15 06:27:03.251869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.251903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 
00:36:43.426 [2024-12-15 06:27:03.252093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.252129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 00:36:43.426 [2024-12-15 06:27:03.252256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.252292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 00:36:43.426 [2024-12-15 06:27:03.252469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.252504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 00:36:43.426 [2024-12-15 06:27:03.252745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.252780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 00:36:43.426 [2024-12-15 06:27:03.252900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.252943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 
00:36:43.426 [2024-12-15 06:27:03.253193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.253229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 00:36:43.426 [2024-12-15 06:27:03.253355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.253391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 00:36:43.426 [2024-12-15 06:27:03.253562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.253592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 00:36:43.426 [2024-12-15 06:27:03.253703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.253734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 00:36:43.426 [2024-12-15 06:27:03.253902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.253931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 
00:36:43.426 [2024-12-15 06:27:03.254098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.254129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 00:36:43.426 [2024-12-15 06:27:03.254237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.254267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 00:36:43.426 [2024-12-15 06:27:03.254425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.254455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 00:36:43.426 [2024-12-15 06:27:03.254554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.254583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 00:36:43.426 [2024-12-15 06:27:03.254694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.254723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 
00:36:43.426 [2024-12-15 06:27:03.254823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.254852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 00:36:43.426 [2024-12-15 06:27:03.254975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.426 [2024-12-15 06:27:03.255015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.426 qpair failed and we were unable to recover it. 00:36:43.426 [2024-12-15 06:27:03.255115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.427 [2024-12-15 06:27:03.255143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.427 qpair failed and we were unable to recover it. 00:36:43.427 [2024-12-15 06:27:03.255242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.427 [2024-12-15 06:27:03.255271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.427 qpair failed and we were unable to recover it. 00:36:43.427 [2024-12-15 06:27:03.255442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.427 [2024-12-15 06:27:03.255471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.427 qpair failed and we were unable to recover it. 
00:36:43.427 [2024-12-15 06:27:03.255589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.427 [2024-12-15 06:27:03.255620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.427 qpair failed and we were unable to recover it. 00:36:43.427 [2024-12-15 06:27:03.255726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.427 [2024-12-15 06:27:03.255755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.427 qpair failed and we were unable to recover it. 00:36:43.427 [2024-12-15 06:27:03.255939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.427 [2024-12-15 06:27:03.255968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.427 qpair failed and we were unable to recover it. 00:36:43.427 [2024-12-15 06:27:03.256147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.427 [2024-12-15 06:27:03.256216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.427 qpair failed and we were unable to recover it. 00:36:43.427 [2024-12-15 06:27:03.256353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.427 [2024-12-15 06:27:03.256389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.427 qpair failed and we were unable to recover it. 
00:36:43.427 [2024-12-15 06:27:03.256514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.427 [2024-12-15 06:27:03.256548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.427 qpair failed and we were unable to recover it. 00:36:43.427 [2024-12-15 06:27:03.256663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.427 [2024-12-15 06:27:03.256698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.427 qpair failed and we were unable to recover it. 00:36:43.427 [2024-12-15 06:27:03.256823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.427 [2024-12-15 06:27:03.256856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.427 qpair failed and we were unable to recover it. 00:36:43.427 [2024-12-15 06:27:03.257031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.427 [2024-12-15 06:27:03.257067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.427 qpair failed and we were unable to recover it. 00:36:43.427 [2024-12-15 06:27:03.257176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.427 [2024-12-15 06:27:03.257207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.427 qpair failed and we were unable to recover it. 
00:36:43.427 [2024-12-15 06:27:03.257310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.427 [2024-12-15 06:27:03.257340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.427 qpair failed and we were unable to recover it.
00:36:43.427 [2024-12-15 06:27:03.257439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.427 [2024-12-15 06:27:03.257474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.427 qpair failed and we were unable to recover it.
00:36:43.427 [2024-12-15 06:27:03.257641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.427 [2024-12-15 06:27:03.257671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.427 qpair failed and we were unable to recover it.
00:36:43.427 [2024-12-15 06:27:03.257805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.427 [2024-12-15 06:27:03.257835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.427 qpair failed and we were unable to recover it.
00:36:43.427 [2024-12-15 06:27:03.257928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.427 [2024-12-15 06:27:03.257957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.427 qpair failed and we were unable to recover it.
00:36:43.427 [2024-12-15 06:27:03.258084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.427 [2024-12-15 06:27:03.258114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.427 qpair failed and we were unable to recover it.
00:36:43.427 [2024-12-15 06:27:03.258206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.427 [2024-12-15 06:27:03.258236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.427 qpair failed and we were unable to recover it.
00:36:43.427 [2024-12-15 06:27:03.258347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.427 [2024-12-15 06:27:03.258377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.427 qpair failed and we were unable to recover it.
00:36:43.427 [2024-12-15 06:27:03.258541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.427 [2024-12-15 06:27:03.258570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.427 qpair failed and we were unable to recover it.
00:36:43.427 [2024-12-15 06:27:03.258771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.427 [2024-12-15 06:27:03.258801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.427 qpair failed and we were unable to recover it.
00:36:43.427 [2024-12-15 06:27:03.258906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.427 [2024-12-15 06:27:03.258934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.427 qpair failed and we were unable to recover it.
00:36:43.427 [2024-12-15 06:27:03.259040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.427 [2024-12-15 06:27:03.259070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.427 qpair failed and we were unable to recover it.
00:36:43.427 [2024-12-15 06:27:03.259195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.427 [2024-12-15 06:27:03.259225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.427 qpair failed and we were unable to recover it.
00:36:43.427 [2024-12-15 06:27:03.259341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.427 [2024-12-15 06:27:03.259372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.427 qpair failed and we were unable to recover it.
00:36:43.427 [2024-12-15 06:27:03.259490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.427 [2024-12-15 06:27:03.259520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.427 qpair failed and we were unable to recover it.
00:36:43.427 [2024-12-15 06:27:03.259646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.427 [2024-12-15 06:27:03.259689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.427 qpair failed and we were unable to recover it.
00:36:43.427 [2024-12-15 06:27:03.259924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.427 [2024-12-15 06:27:03.259987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:43.427 qpair failed and we were unable to recover it.
00:36:43.427 [2024-12-15 06:27:03.260253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.427 [2024-12-15 06:27:03.260289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.427 qpair failed and we were unable to recover it.
00:36:43.427 [2024-12-15 06:27:03.260467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.427 [2024-12-15 06:27:03.260502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.427 qpair failed and we were unable to recover it.
00:36:43.427 [2024-12-15 06:27:03.260629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.427 [2024-12-15 06:27:03.260663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.427 qpair failed and we were unable to recover it.
00:36:43.427 [2024-12-15 06:27:03.260907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.427 [2024-12-15 06:27:03.260940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.427 qpair failed and we were unable to recover it.
00:36:43.427 [2024-12-15 06:27:03.261075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.427 [2024-12-15 06:27:03.261111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.427 qpair failed and we were unable to recover it.
00:36:43.427 [2024-12-15 06:27:03.261285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.427 [2024-12-15 06:27:03.261319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.427 qpair failed and we were unable to recover it.
00:36:43.427 [2024-12-15 06:27:03.261436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.427 [2024-12-15 06:27:03.261469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.427 qpair failed and we were unable to recover it.
00:36:43.427 [2024-12-15 06:27:03.261577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.427 [2024-12-15 06:27:03.261611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.427 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.261810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.261844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.262011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.262046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.262227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.262263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.262381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.262421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.262543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.262576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.262696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.262730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.262897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.262932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.263135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.263170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.263347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.263381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.263514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.263547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.263674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.263709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.263918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.263952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.264157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.264192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.264310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.264345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.264454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.264489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.264683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.264717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.264838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.264871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.264985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.265048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.265172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.265206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.265326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.265359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.265579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.265612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.265731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.265766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.265883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.265917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.266094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.266129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.266260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.266293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.266411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.266445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.266567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.266602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.266857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.266891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.267017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.267052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.267159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.267193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.267373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.267408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.267604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.267637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.267752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.267786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.267913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.267947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.268077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.268114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.268233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.268267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.268476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.268510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.268705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.268740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.428 [2024-12-15 06:27:03.268845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.428 [2024-12-15 06:27:03.268879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.428 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.269006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.269041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.269158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.269192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.269299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.269333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.269452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.269485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.269742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.269781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.269909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.269943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.270058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.270092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.270276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.270310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.270422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.270457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.270563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.270595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.270771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.270804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.270981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.271028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.271144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.271178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.271312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.271345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.271458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.271492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.271666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.271698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.271814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.271847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.271961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.272012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.272212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.272245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.272364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.272398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.272509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.272541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.272730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.272763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.272943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.272976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.273133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.273167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.273294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.273327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.273454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.273487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.273756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.273789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.273910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.273943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.274124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.274158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.274266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.274302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.274427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.429 [2024-12-15 06:27:03.274461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.429 qpair failed and we were unable to recover it.
00:36:43.429 [2024-12-15 06:27:03.274612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.429 [2024-12-15 06:27:03.274659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.429 qpair failed and we were unable to recover it. 00:36:43.429 [2024-12-15 06:27:03.274768] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:36:43.429 [2024-12-15 06:27:03.274816] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:43.429 [2024-12-15 06:27:03.274821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.429 [2024-12-15 06:27:03.274882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.429 qpair failed and we were unable to recover it. 00:36:43.429 [2024-12-15 06:27:03.275084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.429 [2024-12-15 06:27:03.275123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.429 qpair failed and we were unable to recover it. 00:36:43.429 [2024-12-15 06:27:03.275298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.429 [2024-12-15 06:27:03.275331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.429 qpair failed and we were unable to recover it. 
00:36:43.429 [2024-12-15 06:27:03.275502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.429 [2024-12-15 06:27:03.275538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.429 qpair failed and we were unable to recover it. 00:36:43.429 [2024-12-15 06:27:03.275659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.429 [2024-12-15 06:27:03.275694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.429 qpair failed and we were unable to recover it. 00:36:43.429 [2024-12-15 06:27:03.275797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.275833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 00:36:43.430 [2024-12-15 06:27:03.275955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.275990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 00:36:43.430 [2024-12-15 06:27:03.276219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.276254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 
00:36:43.430 [2024-12-15 06:27:03.276435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.276469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 00:36:43.430 [2024-12-15 06:27:03.276595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.276632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 00:36:43.430 [2024-12-15 06:27:03.276751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.276787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 00:36:43.430 [2024-12-15 06:27:03.276920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.276969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 00:36:43.430 [2024-12-15 06:27:03.277169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.277206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 
00:36:43.430 [2024-12-15 06:27:03.277408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.277445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 00:36:43.430 [2024-12-15 06:27:03.277581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.277619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 00:36:43.430 [2024-12-15 06:27:03.277829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.277865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 00:36:43.430 [2024-12-15 06:27:03.277986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.278036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 00:36:43.430 [2024-12-15 06:27:03.278168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.278204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 
00:36:43.430 [2024-12-15 06:27:03.278320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.278356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 00:36:43.430 [2024-12-15 06:27:03.278467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.278504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 00:36:43.430 [2024-12-15 06:27:03.278625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.278662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 00:36:43.430 [2024-12-15 06:27:03.278766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.278802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 00:36:43.430 [2024-12-15 06:27:03.278923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.278960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 
00:36:43.430 [2024-12-15 06:27:03.279095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.279134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 00:36:43.430 [2024-12-15 06:27:03.279265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.279308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 00:36:43.430 [2024-12-15 06:27:03.279451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.279487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 00:36:43.430 [2024-12-15 06:27:03.279738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.279774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 00:36:43.430 [2024-12-15 06:27:03.279883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.279917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 
00:36:43.430 [2024-12-15 06:27:03.280116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.280152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 00:36:43.430 [2024-12-15 06:27:03.280334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.280367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 00:36:43.430 [2024-12-15 06:27:03.280545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.280580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 00:36:43.430 [2024-12-15 06:27:03.280783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.280816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 00:36:43.430 [2024-12-15 06:27:03.280946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.280980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 
00:36:43.430 [2024-12-15 06:27:03.281211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.281247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 00:36:43.430 [2024-12-15 06:27:03.281361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.281395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 00:36:43.430 [2024-12-15 06:27:03.281664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.281697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 00:36:43.430 [2024-12-15 06:27:03.281814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.281848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 00:36:43.430 [2024-12-15 06:27:03.282036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.282072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 
00:36:43.430 [2024-12-15 06:27:03.282197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.430 [2024-12-15 06:27:03.282236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.430 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.282364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.282399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.282523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.282558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.282738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.282772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.282965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.283009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 
00:36:43.431 [2024-12-15 06:27:03.283129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.283164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.283276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.283309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.283418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.283453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.283567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.283600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.283710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.283743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 
00:36:43.431 [2024-12-15 06:27:03.284016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.284051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.284170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.284205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.284318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.284353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.284468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.284501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.284677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.284712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 
00:36:43.431 [2024-12-15 06:27:03.284838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.284872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.285046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.285081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.285192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.285225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.285346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.285380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.285496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.285530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 
00:36:43.431 [2024-12-15 06:27:03.285657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.285692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.285823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.285856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.285964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.286005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.286130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.286164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.286319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.286375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 
00:36:43.431 [2024-12-15 06:27:03.286522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.286558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.286681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.286717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.286905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.286940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.287138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.287174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.287358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.287393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 
00:36:43.431 [2024-12-15 06:27:03.287654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.287688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.287819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.287853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.288037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.288073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.288273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.288306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.288420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.288467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 
00:36:43.431 [2024-12-15 06:27:03.288597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.288631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.288840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.288875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.289066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.431 [2024-12-15 06:27:03.289101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.431 qpair failed and we were unable to recover it. 00:36:43.431 [2024-12-15 06:27:03.289217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.432 [2024-12-15 06:27:03.289250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.432 qpair failed and we were unable to recover it. 00:36:43.432 [2024-12-15 06:27:03.289364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.432 [2024-12-15 06:27:03.289398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.432 qpair failed and we were unable to recover it. 
00:36:43.432 [2024-12-15 06:27:03.289586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.432 [2024-12-15 06:27:03.289627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.432 qpair failed and we were unable to recover it. 00:36:43.432 [2024-12-15 06:27:03.289761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.432 [2024-12-15 06:27:03.289795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.432 qpair failed and we were unable to recover it. 00:36:43.432 [2024-12-15 06:27:03.290049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.432 [2024-12-15 06:27:03.290084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.432 qpair failed and we were unable to recover it. 00:36:43.432 [2024-12-15 06:27:03.290219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.432 [2024-12-15 06:27:03.290254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.432 qpair failed and we were unable to recover it. 00:36:43.432 [2024-12-15 06:27:03.290450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.432 [2024-12-15 06:27:03.290486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.432 qpair failed and we were unable to recover it. 
00:36:43.432 [2024-12-15 06:27:03.290659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.432 [2024-12-15 06:27:03.290692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.432 qpair failed and we were unable to recover it.
[identical connect() failed (errno = 111) / qpair failed messages for tqpair=0x7ff288000b90 repeat through 06:27:03.310]
00:36:43.433 [2024-12-15 06:27:03.301842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.433 [2024-12-15 06:27:03.301906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.433 qpair failed and we were unable to recover it.
[identical messages for tqpair=0x1c89cd0 repeat through 06:27:03.312]
00:36:43.433 [2024-12-15 06:27:03.302082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.433 [2024-12-15 06:27:03.302139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.433 qpair failed and we were unable to recover it.
[identical messages for tqpair=0x7ff290000b90 repeat through 06:27:03.312]
00:36:43.435 [2024-12-15 06:27:03.313063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.435 [2024-12-15 06:27:03.313098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.435 qpair failed and we were unable to recover it. 00:36:43.435 [2024-12-15 06:27:03.313303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.435 [2024-12-15 06:27:03.313338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.435 qpair failed and we were unable to recover it. 00:36:43.435 [2024-12-15 06:27:03.313461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.435 [2024-12-15 06:27:03.313495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.435 qpair failed and we were unable to recover it. 00:36:43.435 [2024-12-15 06:27:03.313623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.435 [2024-12-15 06:27:03.313656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.435 qpair failed and we were unable to recover it. 00:36:43.435 [2024-12-15 06:27:03.313767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.435 [2024-12-15 06:27:03.313800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.435 qpair failed and we were unable to recover it. 
00:36:43.435 [2024-12-15 06:27:03.313908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.435 [2024-12-15 06:27:03.313940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.435 qpair failed and we were unable to recover it. 00:36:43.435 [2024-12-15 06:27:03.314087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.435 [2024-12-15 06:27:03.314124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.435 qpair failed and we were unable to recover it. 00:36:43.435 [2024-12-15 06:27:03.314230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.435 [2024-12-15 06:27:03.314263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.435 qpair failed and we were unable to recover it. 00:36:43.435 [2024-12-15 06:27:03.314443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.435 [2024-12-15 06:27:03.314477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.435 qpair failed and we were unable to recover it. 00:36:43.435 [2024-12-15 06:27:03.314690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.435 [2024-12-15 06:27:03.314723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.435 qpair failed and we were unable to recover it. 
00:36:43.435 [2024-12-15 06:27:03.314838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.435 [2024-12-15 06:27:03.314872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.435 qpair failed and we were unable to recover it. 00:36:43.435 [2024-12-15 06:27:03.315068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.435 [2024-12-15 06:27:03.315104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.435 qpair failed and we were unable to recover it. 00:36:43.435 [2024-12-15 06:27:03.315226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.435 [2024-12-15 06:27:03.315259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.435 qpair failed and we were unable to recover it. 00:36:43.435 [2024-12-15 06:27:03.315373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.435 [2024-12-15 06:27:03.315407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.435 qpair failed and we were unable to recover it. 00:36:43.435 [2024-12-15 06:27:03.315536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.435 [2024-12-15 06:27:03.315568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.435 qpair failed and we were unable to recover it. 
00:36:43.435 [2024-12-15 06:27:03.315675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.435 [2024-12-15 06:27:03.315708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.435 qpair failed and we were unable to recover it. 00:36:43.435 [2024-12-15 06:27:03.315890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.435 [2024-12-15 06:27:03.315924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.435 qpair failed and we were unable to recover it. 00:36:43.435 [2024-12-15 06:27:03.316043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.435 [2024-12-15 06:27:03.316077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.435 qpair failed and we were unable to recover it. 00:36:43.435 [2024-12-15 06:27:03.316182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.435 [2024-12-15 06:27:03.316216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.435 qpair failed and we were unable to recover it. 00:36:43.435 [2024-12-15 06:27:03.316348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.435 [2024-12-15 06:27:03.316382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.435 qpair failed and we were unable to recover it. 
00:36:43.435 [2024-12-15 06:27:03.316490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.435 [2024-12-15 06:27:03.316523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.435 qpair failed and we were unable to recover it. 00:36:43.435 [2024-12-15 06:27:03.316633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.435 [2024-12-15 06:27:03.316666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.435 qpair failed and we were unable to recover it. 00:36:43.435 [2024-12-15 06:27:03.316906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.435 [2024-12-15 06:27:03.316939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.435 qpair failed and we were unable to recover it. 00:36:43.435 [2024-12-15 06:27:03.317090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.435 [2024-12-15 06:27:03.317126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.435 qpair failed and we were unable to recover it. 00:36:43.435 [2024-12-15 06:27:03.317306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.435 [2024-12-15 06:27:03.317340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.435 qpair failed and we were unable to recover it. 
00:36:43.435 [2024-12-15 06:27:03.317454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.435 [2024-12-15 06:27:03.317488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.435 qpair failed and we were unable to recover it. 00:36:43.435 [2024-12-15 06:27:03.317672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.435 [2024-12-15 06:27:03.317705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.435 qpair failed and we were unable to recover it. 00:36:43.435 [2024-12-15 06:27:03.317799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.435 [2024-12-15 06:27:03.317838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.435 qpair failed and we were unable to recover it. 00:36:43.435 [2024-12-15 06:27:03.318017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.435 [2024-12-15 06:27:03.318051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.435 qpair failed and we were unable to recover it. 00:36:43.436 [2024-12-15 06:27:03.318228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.318261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 
00:36:43.436 [2024-12-15 06:27:03.318389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.318423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 00:36:43.436 [2024-12-15 06:27:03.318526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.318559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 00:36:43.436 [2024-12-15 06:27:03.318744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.318778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 00:36:43.436 [2024-12-15 06:27:03.318904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.318937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 00:36:43.436 [2024-12-15 06:27:03.319060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.319093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 
00:36:43.436 [2024-12-15 06:27:03.319213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.319247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 00:36:43.436 [2024-12-15 06:27:03.319373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.319407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 00:36:43.436 [2024-12-15 06:27:03.319607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.319640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 00:36:43.436 [2024-12-15 06:27:03.319768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.319802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 00:36:43.436 [2024-12-15 06:27:03.319915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.319948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 
00:36:43.436 [2024-12-15 06:27:03.320132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.320166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 00:36:43.436 [2024-12-15 06:27:03.320345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.320378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 00:36:43.436 [2024-12-15 06:27:03.320554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.320588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 00:36:43.436 [2024-12-15 06:27:03.320796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.320830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 00:36:43.436 [2024-12-15 06:27:03.320956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.320989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 
00:36:43.436 [2024-12-15 06:27:03.321197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.321231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 00:36:43.436 [2024-12-15 06:27:03.321344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.321377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 00:36:43.436 [2024-12-15 06:27:03.321481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.321514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 00:36:43.436 [2024-12-15 06:27:03.321686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.321721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 00:36:43.436 [2024-12-15 06:27:03.321838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.321871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 
00:36:43.436 [2024-12-15 06:27:03.321981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.322028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 00:36:43.436 [2024-12-15 06:27:03.322145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.322178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 00:36:43.436 [2024-12-15 06:27:03.322299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.322332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 00:36:43.436 [2024-12-15 06:27:03.322471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.322504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 00:36:43.436 [2024-12-15 06:27:03.322680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.322713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 
00:36:43.436 [2024-12-15 06:27:03.322821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.322854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 00:36:43.436 [2024-12-15 06:27:03.322974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.323019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 00:36:43.436 [2024-12-15 06:27:03.323155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.323188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 00:36:43.436 [2024-12-15 06:27:03.323327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.323360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 00:36:43.436 [2024-12-15 06:27:03.323479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.323512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 
00:36:43.436 [2024-12-15 06:27:03.323695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.323728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 00:36:43.436 [2024-12-15 06:27:03.324034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.324069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 00:36:43.436 [2024-12-15 06:27:03.324249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.324283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 00:36:43.436 [2024-12-15 06:27:03.324409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.324442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 00:36:43.436 [2024-12-15 06:27:03.324587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.324620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 
00:36:43.436 [2024-12-15 06:27:03.324729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.324763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 00:36:43.436 [2024-12-15 06:27:03.324869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.436 [2024-12-15 06:27:03.324903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.436 qpair failed and we were unable to recover it. 00:36:43.437 [2024-12-15 06:27:03.325015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.437 [2024-12-15 06:27:03.325056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.437 qpair failed and we were unable to recover it. 00:36:43.437 [2024-12-15 06:27:03.325183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.437 [2024-12-15 06:27:03.325216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.437 qpair failed and we were unable to recover it. 00:36:43.437 [2024-12-15 06:27:03.325331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.437 [2024-12-15 06:27:03.325364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.437 qpair failed and we were unable to recover it. 
00:36:43.437 [2024-12-15 06:27:03.325471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.437 [2024-12-15 06:27:03.325504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.437 qpair failed and we were unable to recover it. 00:36:43.437 [2024-12-15 06:27:03.325682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.437 [2024-12-15 06:27:03.325716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.437 qpair failed and we were unable to recover it. 00:36:43.437 [2024-12-15 06:27:03.325828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.437 [2024-12-15 06:27:03.325861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.437 qpair failed and we were unable to recover it. 00:36:43.437 [2024-12-15 06:27:03.325971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.437 [2024-12-15 06:27:03.326013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.437 qpair failed and we were unable to recover it. 00:36:43.437 [2024-12-15 06:27:03.326124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.437 [2024-12-15 06:27:03.326158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.437 qpair failed and we were unable to recover it. 
00:36:43.437 [2024-12-15 06:27:03.326286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.437 [2024-12-15 06:27:03.326319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.437 qpair failed and we were unable to recover it.
[... the three-line sequence above (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it) repeats for tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 from 06:27:03.326 through 06:27:03.344 ...]
00:36:43.439 [2024-12-15 06:27:03.344955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.439 [2024-12-15 06:27:03.345041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.439 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 through 06:27:03.347 ...]
00:36:43.440 [2024-12-15 06:27:03.347896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.440 [2024-12-15 06:27:03.347929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.440 qpair failed and we were unable to recover it.
00:36:43.440 [2024-12-15 06:27:03.348064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.440 [2024-12-15 06:27:03.348100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.440 qpair failed and we were unable to recover it. 00:36:43.440 [2024-12-15 06:27:03.348225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.440 [2024-12-15 06:27:03.348258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.440 qpair failed and we were unable to recover it. 00:36:43.440 [2024-12-15 06:27:03.348381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.440 [2024-12-15 06:27:03.348415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.440 qpair failed and we were unable to recover it. 00:36:43.440 [2024-12-15 06:27:03.348526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.440 [2024-12-15 06:27:03.348560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.440 qpair failed and we were unable to recover it. 00:36:43.440 [2024-12-15 06:27:03.348772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.440 [2024-12-15 06:27:03.348806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.440 qpair failed and we were unable to recover it. 
00:36:43.440 [2024-12-15 06:27:03.348920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.440 [2024-12-15 06:27:03.348953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.440 qpair failed and we were unable to recover it. 00:36:43.440 [2024-12-15 06:27:03.349098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.440 [2024-12-15 06:27:03.349134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.440 qpair failed and we were unable to recover it. 00:36:43.440 [2024-12-15 06:27:03.349320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.440 [2024-12-15 06:27:03.349353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.440 qpair failed and we were unable to recover it. 00:36:43.440 [2024-12-15 06:27:03.349458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.440 [2024-12-15 06:27:03.349491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.440 qpair failed and we were unable to recover it. 00:36:43.440 [2024-12-15 06:27:03.349612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.440 [2024-12-15 06:27:03.349645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.440 qpair failed and we were unable to recover it. 
00:36:43.440 [2024-12-15 06:27:03.349766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.440 [2024-12-15 06:27:03.349801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.440 qpair failed and we were unable to recover it. 00:36:43.440 [2024-12-15 06:27:03.349986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.440 [2024-12-15 06:27:03.350035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.440 qpair failed and we were unable to recover it. 00:36:43.440 [2024-12-15 06:27:03.350225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.440 [2024-12-15 06:27:03.350253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.440 qpair failed and we were unable to recover it. 00:36:43.440 [2024-12-15 06:27:03.350348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.440 [2024-12-15 06:27:03.350376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.440 qpair failed and we were unable to recover it. 00:36:43.440 [2024-12-15 06:27:03.350541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.440 [2024-12-15 06:27:03.350568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.440 qpair failed and we were unable to recover it. 
00:36:43.440 [2024-12-15 06:27:03.350672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.440 [2024-12-15 06:27:03.350700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.440 qpair failed and we were unable to recover it. 00:36:43.440 [2024-12-15 06:27:03.350806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.440 [2024-12-15 06:27:03.350835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.440 qpair failed and we were unable to recover it. 00:36:43.440 [2024-12-15 06:27:03.350946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.440 [2024-12-15 06:27:03.350974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.440 qpair failed and we were unable to recover it. 00:36:43.440 [2024-12-15 06:27:03.351092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.440 [2024-12-15 06:27:03.351121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.440 qpair failed and we were unable to recover it. 00:36:43.440 [2024-12-15 06:27:03.351278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.440 [2024-12-15 06:27:03.351305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.440 qpair failed and we were unable to recover it. 
00:36:43.440 [2024-12-15 06:27:03.351404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.440 [2024-12-15 06:27:03.351432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.440 qpair failed and we were unable to recover it. 00:36:43.440 [2024-12-15 06:27:03.351548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.440 [2024-12-15 06:27:03.351577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.440 qpair failed and we were unable to recover it. 00:36:43.440 [2024-12-15 06:27:03.351695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.440 [2024-12-15 06:27:03.351723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.440 qpair failed and we were unable to recover it. 00:36:43.440 [2024-12-15 06:27:03.351842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.440 [2024-12-15 06:27:03.351871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.440 qpair failed and we were unable to recover it. 00:36:43.440 [2024-12-15 06:27:03.352080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.440 [2024-12-15 06:27:03.352144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.440 qpair failed and we were unable to recover it. 
00:36:43.440 [2024-12-15 06:27:03.352289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.352338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.352466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.352501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.352620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.352655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.352786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.352820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.352952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.352985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 
00:36:43.441 [2024-12-15 06:27:03.353195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.353228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.353398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.353432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.353559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.353592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.353705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.353738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.353847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.353880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 
00:36:43.441 [2024-12-15 06:27:03.354122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.354158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.354295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.354328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.354521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.354554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.354675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.354710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.354845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.354878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 
00:36:43.441 [2024-12-15 06:27:03.354989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.355030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.355141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.355174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.355365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.355399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.355531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.355565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.355691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.355725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 
00:36:43.441 [2024-12-15 06:27:03.355836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.355872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.355979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.356024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.356129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.356163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.356371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.356405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.356515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.356548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 
00:36:43.441 [2024-12-15 06:27:03.356746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.356780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.356979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.357024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.357239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.357273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.357387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.357421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.357537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.357570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 
00:36:43.441 [2024-12-15 06:27:03.357689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.357723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.357850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.357883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.358010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:43.441 [2024-12-15 06:27:03.358085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.358121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.358249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.358282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.358405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.358439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 
00:36:43.441 [2024-12-15 06:27:03.358556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.358588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.358714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.358747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.358875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.358908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.441 [2024-12-15 06:27:03.359036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.441 [2024-12-15 06:27:03.359069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.441 qpair failed and we were unable to recover it. 00:36:43.442 [2024-12-15 06:27:03.359194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.442 [2024-12-15 06:27:03.359232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.442 qpair failed and we were unable to recover it. 
00:36:43.442 [2024-12-15 06:27:03.359353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.442 [2024-12-15 06:27:03.359385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.442 qpair failed and we were unable to recover it. 00:36:43.442 [2024-12-15 06:27:03.359500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.442 [2024-12-15 06:27:03.359533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.442 qpair failed and we were unable to recover it. 00:36:43.442 [2024-12-15 06:27:03.359711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.442 [2024-12-15 06:27:03.359744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.442 qpair failed and we were unable to recover it. 00:36:43.442 [2024-12-15 06:27:03.359925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.442 [2024-12-15 06:27:03.359959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.442 qpair failed and we were unable to recover it. 00:36:43.442 [2024-12-15 06:27:03.360089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.442 [2024-12-15 06:27:03.360123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.442 qpair failed and we were unable to recover it. 
00:36:43.442 [2024-12-15 06:27:03.360250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.442 [2024-12-15 06:27:03.360282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.442 qpair failed and we were unable to recover it. 00:36:43.442 [2024-12-15 06:27:03.360471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.442 [2024-12-15 06:27:03.360505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.442 qpair failed and we were unable to recover it. 00:36:43.442 [2024-12-15 06:27:03.360609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.442 [2024-12-15 06:27:03.360642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.442 qpair failed and we were unable to recover it. 00:36:43.442 [2024-12-15 06:27:03.360824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.442 [2024-12-15 06:27:03.360856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.442 qpair failed and we were unable to recover it. 00:36:43.442 [2024-12-15 06:27:03.360971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.442 [2024-12-15 06:27:03.361011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.442 qpair failed and we were unable to recover it. 
00:36:43.442 [2024-12-15 06:27:03.361255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.442 [2024-12-15 06:27:03.361287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.442 qpair failed and we were unable to recover it.
00:36:43.444 [2024-12-15 06:27:03.372312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.444 [2024-12-15 06:27:03.372360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.444 qpair failed and we were unable to recover it.
00:36:43.444 [2024-12-15 06:27:03.373098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.444 [2024-12-15 06:27:03.373144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:43.444 qpair failed and we were unable to recover it.
00:36:43.445 [2024-12-15 06:27:03.380283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.445 [2024-12-15 06:27:03.380313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.445 qpair failed and we were unable to recover it.
00:36:43.445 [2024-12-15 06:27:03.380419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.445 [2024-12-15 06:27:03.380450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.445 qpair failed and we were unable to recover it.
00:36:43.445 [2024-12-15 06:27:03.380571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.445 [2024-12-15 06:27:03.380601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.445 qpair failed and we were unable to recover it.
00:36:43.445 [2024-12-15 06:27:03.380649] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:43.445 [2024-12-15 06:27:03.380676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:43.445 [2024-12-15 06:27:03.380683] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:43.445 [2024-12-15 06:27:03.380692] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:43.445 [2024-12-15 06:27:03.380698] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:43.445 [2024-12-15 06:27:03.380708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.445 [2024-12-15 06:27:03.380736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.445 qpair failed and we were unable to recover it. 00:36:43.445 [2024-12-15 06:27:03.380847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.445 [2024-12-15 06:27:03.380876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.445 qpair failed and we were unable to recover it. 00:36:43.445 [2024-12-15 06:27:03.380989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.445 [2024-12-15 06:27:03.381027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.445 qpair failed and we were unable to recover it. 00:36:43.445 [2024-12-15 06:27:03.381134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.445 [2024-12-15 06:27:03.381162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.445 qpair failed and we were unable to recover it. 00:36:43.445 [2024-12-15 06:27:03.381271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.445 [2024-12-15 06:27:03.381300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.445 qpair failed and we were unable to recover it. 
00:36:43.445 [2024-12-15 06:27:03.381412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.445 [2024-12-15 06:27:03.381441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.445 qpair failed and we were unable to recover it. 00:36:43.445 [2024-12-15 06:27:03.381548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.445 [2024-12-15 06:27:03.381579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.445 qpair failed and we were unable to recover it. 00:36:43.445 [2024-12-15 06:27:03.381747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.445 [2024-12-15 06:27:03.381778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.445 qpair failed and we were unable to recover it. 00:36:43.445 [2024-12-15 06:27:03.381888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.445 [2024-12-15 06:27:03.381919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.445 qpair failed and we were unable to recover it. 00:36:43.445 [2024-12-15 06:27:03.382121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.445 [2024-12-15 06:27:03.382155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.445 qpair failed and we were unable to recover it. 
00:36:43.445 [2024-12-15 06:27:03.382134] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5
00:36:43.445 [2024-12-15 06:27:03.382258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.445 [2024-12-15 06:27:03.382290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.445 [2024-12-15 06:27:03.382176] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6
00:36:43.445 qpair failed and we were unable to recover it.
00:36:43.445 [2024-12-15 06:27:03.382283] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:36:43.445 [2024-12-15 06:27:03.382284] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7
00:36:43.445 [2024-12-15 06:27:03.382428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.445 [2024-12-15 06:27:03.382480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.445 qpair failed and we were unable to recover it.
00:36:43.445 [2024-12-15 06:27:03.382620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.445 [2024-12-15 06:27:03.382657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420
00:36:43.445 qpair failed and we were unable to recover it.
00:36:43.445 [2024-12-15 06:27:03.382800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.445 [2024-12-15 06:27:03.382841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.445 qpair failed and we were unable to recover it.
00:36:43.445 [2024-12-15 06:27:03.382958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.445 [2024-12-15 06:27:03.382990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.445 qpair failed and we were unable to recover it. 00:36:43.445 [2024-12-15 06:27:03.383116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.445 [2024-12-15 06:27:03.383150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.445 qpair failed and we were unable to recover it. 00:36:43.445 [2024-12-15 06:27:03.383311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.445 [2024-12-15 06:27:03.383345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.445 qpair failed and we were unable to recover it. 00:36:43.445 [2024-12-15 06:27:03.383521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.445 [2024-12-15 06:27:03.383556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.445 qpair failed and we were unable to recover it. 00:36:43.445 [2024-12-15 06:27:03.383737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.445 [2024-12-15 06:27:03.383770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.445 qpair failed and we were unable to recover it. 
00:36:43.445 [2024-12-15 06:27:03.383881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.445 [2024-12-15 06:27:03.383915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.445 qpair failed and we were unable to recover it. 00:36:43.445 [2024-12-15 06:27:03.384045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.445 [2024-12-15 06:27:03.384080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.445 qpair failed and we were unable to recover it. 00:36:43.445 [2024-12-15 06:27:03.384191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.445 [2024-12-15 06:27:03.384225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.445 qpair failed and we were unable to recover it. 00:36:43.445 [2024-12-15 06:27:03.384340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.445 [2024-12-15 06:27:03.384374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.445 qpair failed and we were unable to recover it. 00:36:43.445 [2024-12-15 06:27:03.384567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.445 [2024-12-15 06:27:03.384601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.445 qpair failed and we were unable to recover it. 
00:36:43.445 [2024-12-15 06:27:03.384710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.445 [2024-12-15 06:27:03.384743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.445 qpair failed and we were unable to recover it. 00:36:43.445 [2024-12-15 06:27:03.384924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.445 [2024-12-15 06:27:03.384960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.445 qpair failed and we were unable to recover it. 00:36:43.445 [2024-12-15 06:27:03.385095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.445 [2024-12-15 06:27:03.385131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.445 qpair failed and we were unable to recover it. 00:36:43.445 [2024-12-15 06:27:03.385251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.445 [2024-12-15 06:27:03.385284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.445 qpair failed and we were unable to recover it. 00:36:43.445 [2024-12-15 06:27:03.385398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.445 [2024-12-15 06:27:03.385430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.445 qpair failed and we were unable to recover it. 
00:36:43.445 [2024-12-15 06:27:03.385633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.385666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.385769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.385801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.385944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.385976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.386163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.386198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.386309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.386342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 
00:36:43.446 [2024-12-15 06:27:03.386450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.386483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.386604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.386636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.386760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.386792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.386900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.386933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.387231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.387272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 
00:36:43.446 [2024-12-15 06:27:03.387413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.387446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.387625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.387658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.387787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.387819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.387933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.387966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.388086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.388120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 
00:36:43.446 [2024-12-15 06:27:03.388251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.388285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.388394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.388426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.388606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.388640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.388813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.388847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.388955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.388986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 
00:36:43.446 [2024-12-15 06:27:03.389173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.389207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.389325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.389359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.389471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.389503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.389627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.389661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.389778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.389811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 
00:36:43.446 [2024-12-15 06:27:03.389942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.389975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.390090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.390124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.390232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.390264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.390381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.390414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.390596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.390630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 
00:36:43.446 [2024-12-15 06:27:03.390736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.390770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.390955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.390988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.391115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.391149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.391276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.391309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.391428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.391460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 
00:36:43.446 [2024-12-15 06:27:03.391568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.391601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.391739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.391779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.391967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.392017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.446 [2024-12-15 06:27:03.392281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.446 [2024-12-15 06:27:03.392318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.446 qpair failed and we were unable to recover it. 00:36:43.447 [2024-12-15 06:27:03.392535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.447 [2024-12-15 06:27:03.392569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.447 qpair failed and we were unable to recover it. 
00:36:43.447 [2024-12-15 06:27:03.392687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.447 [2024-12-15 06:27:03.392721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.447 qpair failed and we were unable to recover it. 00:36:43.447 [2024-12-15 06:27:03.392849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.447 [2024-12-15 06:27:03.392883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.447 qpair failed and we were unable to recover it. 00:36:43.447 [2024-12-15 06:27:03.393026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.447 [2024-12-15 06:27:03.393061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.447 qpair failed and we were unable to recover it. 00:36:43.447 [2024-12-15 06:27:03.393185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.447 [2024-12-15 06:27:03.393220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.447 qpair failed and we were unable to recover it. 00:36:43.447 [2024-12-15 06:27:03.393331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.447 [2024-12-15 06:27:03.393365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.447 qpair failed and we were unable to recover it. 
00:36:43.447 [2024-12-15 06:27:03.393470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.447 [2024-12-15 06:27:03.393504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.447 qpair failed and we were unable to recover it. 00:36:43.447 [2024-12-15 06:27:03.393622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.447 [2024-12-15 06:27:03.393656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.447 qpair failed and we were unable to recover it. 00:36:43.447 [2024-12-15 06:27:03.393843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.447 [2024-12-15 06:27:03.393876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.447 qpair failed and we were unable to recover it. 00:36:43.447 [2024-12-15 06:27:03.393986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.447 [2024-12-15 06:27:03.394032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.447 qpair failed and we were unable to recover it. 00:36:43.447 [2024-12-15 06:27:03.394154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.447 [2024-12-15 06:27:03.394195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.447 qpair failed and we were unable to recover it. 
00:36:43.447 [2024-12-15 06:27:03.394311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.447 [2024-12-15 06:27:03.394347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.447 qpair failed and we were unable to recover it.
00:36:43.447 [2024-12-15 06:27:03.398788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.447 [2024-12-15 06:27:03.398836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff284000b90 with addr=10.0.0.2, port=4420 00:36:43.447 qpair failed and we were unable to recover it.
00:36:43.447 [2024-12-15 06:27:03.398961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.447 [2024-12-15 06:27:03.399011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.448 qpair failed and we were unable to recover it.
00:36:43.450 [2024-12-15 06:27:03.414264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.414298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 00:36:43.450 [2024-12-15 06:27:03.414412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.414446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 00:36:43.450 [2024-12-15 06:27:03.414633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.414667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 00:36:43.450 [2024-12-15 06:27:03.414860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.414894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 00:36:43.450 [2024-12-15 06:27:03.415015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.415050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 
00:36:43.450 [2024-12-15 06:27:03.415170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.415207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 00:36:43.450 [2024-12-15 06:27:03.415342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.415377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 00:36:43.450 [2024-12-15 06:27:03.415495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.415529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 00:36:43.450 [2024-12-15 06:27:03.415714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.415749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 00:36:43.450 [2024-12-15 06:27:03.415926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.415960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 
00:36:43.450 [2024-12-15 06:27:03.416087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.416123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 00:36:43.450 [2024-12-15 06:27:03.416301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.416335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 00:36:43.450 [2024-12-15 06:27:03.416459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.416493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 00:36:43.450 [2024-12-15 06:27:03.416737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.416771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 00:36:43.450 [2024-12-15 06:27:03.416876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.416909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 
00:36:43.450 [2024-12-15 06:27:03.417045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.417080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 00:36:43.450 [2024-12-15 06:27:03.417192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.417225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 00:36:43.450 [2024-12-15 06:27:03.417403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.417436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 00:36:43.450 [2024-12-15 06:27:03.417550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.417585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 00:36:43.450 [2024-12-15 06:27:03.417768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.417805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 
00:36:43.450 [2024-12-15 06:27:03.418002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.418037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 00:36:43.450 [2024-12-15 06:27:03.418168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.418201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 00:36:43.450 [2024-12-15 06:27:03.418390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.418424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 00:36:43.450 [2024-12-15 06:27:03.418653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.418688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 00:36:43.450 [2024-12-15 06:27:03.418864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.418898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 
00:36:43.450 [2024-12-15 06:27:03.419014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.419049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 00:36:43.450 [2024-12-15 06:27:03.419166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.419200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 00:36:43.450 [2024-12-15 06:27:03.419306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.419340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 00:36:43.450 [2024-12-15 06:27:03.419453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.419486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 00:36:43.450 [2024-12-15 06:27:03.419657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.419691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 
00:36:43.450 [2024-12-15 06:27:03.419817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.419851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 00:36:43.450 [2024-12-15 06:27:03.420024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.420059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 00:36:43.450 [2024-12-15 06:27:03.420194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.420228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 00:36:43.450 [2024-12-15 06:27:03.420441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.450 [2024-12-15 06:27:03.420502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.450 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.420646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.420690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 
00:36:43.451 [2024-12-15 06:27:03.420826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.420861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.420971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.421015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.421199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.421234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.421363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.421397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.421613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.421646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 
00:36:43.451 [2024-12-15 06:27:03.421755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.421788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.421908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.421941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.422148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.422182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.422377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.422410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.422580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.422614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 
00:36:43.451 [2024-12-15 06:27:03.422795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.422829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.422956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.423007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.423156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.423190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.423318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.423353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.423530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.423564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 
00:36:43.451 [2024-12-15 06:27:03.423688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.423722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.423831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.423865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.423988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.424033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.424140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.424172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.424291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.424326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 
00:36:43.451 [2024-12-15 06:27:03.424446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.424481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.424681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.424716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.424834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.424868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.425054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.425091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.425211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.425245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 
00:36:43.451 [2024-12-15 06:27:03.425378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.425412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.425656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.425689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.425929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.425964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.426096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.426128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.426312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.426346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 
00:36:43.451 [2024-12-15 06:27:03.426522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.426556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.426730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.426763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.426892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.426925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.427118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.427154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.427292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.427325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 
00:36:43.451 [2024-12-15 06:27:03.427442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.427475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.427661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.427695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.427879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.427913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.428044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.428088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.428289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.428324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 
00:36:43.451 [2024-12-15 06:27:03.428529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.428564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.428680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.428713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.451 qpair failed and we were unable to recover it. 00:36:43.451 [2024-12-15 06:27:03.428823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.451 [2024-12-15 06:27:03.428857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.452 qpair failed and we were unable to recover it. 00:36:43.452 [2024-12-15 06:27:03.429118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.452 [2024-12-15 06:27:03.429156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.452 qpair failed and we were unable to recover it. 00:36:43.452 [2024-12-15 06:27:03.429283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.452 [2024-12-15 06:27:03.429317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.452 qpair failed and we were unable to recover it. 
00:36:43.454 [2024-12-15 06:27:03.448083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.454 [2024-12-15 06:27:03.448115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.454 qpair failed and we were unable to recover it. 00:36:43.454 [2024-12-15 06:27:03.448285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.454 [2024-12-15 06:27:03.448316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.454 qpair failed and we were unable to recover it. 00:36:43.454 [2024-12-15 06:27:03.448422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.454 [2024-12-15 06:27:03.448453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.454 qpair failed and we were unable to recover it. 00:36:43.454 [2024-12-15 06:27:03.448621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.454 [2024-12-15 06:27:03.448656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.454 qpair failed and we were unable to recover it. 00:36:43.454 [2024-12-15 06:27:03.448776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.454 [2024-12-15 06:27:03.448807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.454 qpair failed and we were unable to recover it. 
00:36:43.454 [2024-12-15 06:27:03.448905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.454 [2024-12-15 06:27:03.448935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.454 qpair failed and we were unable to recover it. 00:36:43.454 [2024-12-15 06:27:03.449067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.454 [2024-12-15 06:27:03.449098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.454 qpair failed and we were unable to recover it. 00:36:43.454 [2024-12-15 06:27:03.449209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.454 [2024-12-15 06:27:03.449241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.454 qpair failed and we were unable to recover it. 00:36:43.454 [2024-12-15 06:27:03.449355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.454 [2024-12-15 06:27:03.449388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.454 qpair failed and we were unable to recover it. 00:36:43.454 [2024-12-15 06:27:03.449561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.454 [2024-12-15 06:27:03.449590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.454 qpair failed and we were unable to recover it. 
00:36:43.454 [2024-12-15 06:27:03.449690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.454 [2024-12-15 06:27:03.449721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.454 qpair failed and we were unable to recover it. 00:36:43.454 [2024-12-15 06:27:03.449908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.454 [2024-12-15 06:27:03.449939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.454 qpair failed and we were unable to recover it. 00:36:43.454 [2024-12-15 06:27:03.450054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.454 [2024-12-15 06:27:03.450086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.454 qpair failed and we were unable to recover it. 00:36:43.454 [2024-12-15 06:27:03.450257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.454 [2024-12-15 06:27:03.450288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.454 qpair failed and we were unable to recover it. 00:36:43.454 [2024-12-15 06:27:03.450392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.454 [2024-12-15 06:27:03.450420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.454 qpair failed and we were unable to recover it. 
00:36:43.454 [2024-12-15 06:27:03.450517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.454 [2024-12-15 06:27:03.450546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.454 qpair failed and we were unable to recover it. 00:36:43.454 [2024-12-15 06:27:03.450730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.454 [2024-12-15 06:27:03.450759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.454 qpair failed and we were unable to recover it. 00:36:43.454 [2024-12-15 06:27:03.450870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.454 [2024-12-15 06:27:03.450897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.454 qpair failed and we were unable to recover it. 00:36:43.454 [2024-12-15 06:27:03.451003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.454 [2024-12-15 06:27:03.451034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.454 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.451146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.451174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 
00:36:43.455 [2024-12-15 06:27:03.451352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.451383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.451485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.451516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.451696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.451726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.451836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.451867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.451976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.452018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 
00:36:43.455 [2024-12-15 06:27:03.452131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.452162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.452261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.452291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.452392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.452423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.452520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.452551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.452668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.452697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 
00:36:43.455 [2024-12-15 06:27:03.452803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.452835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.453014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.453046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.453177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.453208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.453325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.453355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.453593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.453624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 
00:36:43.455 [2024-12-15 06:27:03.453729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.453761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.453932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.453963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.454117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.454146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.454315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.454343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.454475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.454503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 
00:36:43.455 [2024-12-15 06:27:03.454611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.454639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.454747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.454776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.454873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.454901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.455130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.455165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.455399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.455427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 
00:36:43.455 [2024-12-15 06:27:03.455597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.455625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.455788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.455817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.455925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.455952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.456055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.456083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.456249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.456277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 
00:36:43.455 [2024-12-15 06:27:03.456389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.456418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.456519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.456548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.456647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.456675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.456774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.456802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.456913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.456942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 
00:36:43.455 [2024-12-15 06:27:03.457126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.457155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.457267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.457296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.457460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.457488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.457602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.457629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.457801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.457830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 
00:36:43.455 [2024-12-15 06:27:03.458006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.458035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.458137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.458165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.458398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.458427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.458523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.455 [2024-12-15 06:27:03.458550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.455 qpair failed and we were unable to recover it. 00:36:43.455 [2024-12-15 06:27:03.458654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.458683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 
00:36:43.456 [2024-12-15 06:27:03.458956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.458983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.459173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.459202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.459299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.459327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.459568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.459595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.459723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.459752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 
00:36:43.456 [2024-12-15 06:27:03.459848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.459876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.460050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.460079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.460194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.460222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.460324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.460352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.460468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.460495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 
00:36:43.456 [2024-12-15 06:27:03.460593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.460620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.460779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.460807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.460910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.460937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.461113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.461141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.461238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.461266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 
00:36:43.456 [2024-12-15 06:27:03.461441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.461468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.461560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.461587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.461748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.461776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.461873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.461905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.462016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.462045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 
00:36:43.456 [2024-12-15 06:27:03.462151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.462178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.462280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.462308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.462473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.462500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.462601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.462629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.462731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.462759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 
00:36:43.456 [2024-12-15 06:27:03.462868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.462896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.463083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.463113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.463271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.463299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.463533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.463561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.463727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.463755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 
00:36:43.456 [2024-12-15 06:27:03.463880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.463907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.464104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.464134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.464316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.464344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.464441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.464468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.464589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.464616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 
00:36:43.456 [2024-12-15 06:27:03.464728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.464755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.464859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.464887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.464986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.465033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.465131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.465158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.465323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.465351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 
00:36:43.456 [2024-12-15 06:27:03.465449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.465476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.465647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.465674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.456 qpair failed and we were unable to recover it. 00:36:43.456 [2024-12-15 06:27:03.465781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.456 [2024-12-15 06:27:03.465808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.465983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.466018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.466177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.466205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 
00:36:43.457 [2024-12-15 06:27:03.466398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.466426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.466552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.466580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.466757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.466785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.466940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.466967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.467203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.467252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 
00:36:43.457 [2024-12-15 06:27:03.467453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.467487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.467586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.467620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.467809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.467844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.468027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.468062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.468330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.468364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 
00:36:43.457 [2024-12-15 06:27:03.468540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.468574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.468745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.468779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.468906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.468939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.469148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.469196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.469385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.469417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 
00:36:43.457 [2024-12-15 06:27:03.469685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.469718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.469926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.469958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.470069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.470098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.470262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:43.457 [2024-12-15 06:27:03.470289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.470468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.470497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 
00:36:43.457 [2024-12-15 06:27:03.470589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.470616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.470776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.470803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.470928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.470956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:43.457 [2024-12-15 06:27:03.471148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.471176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 
00:36:43.457 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:43.457 [2024-12-15 06:27:03.471367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.471395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.471494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.471523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:43.457 [2024-12-15 06:27:03.471686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.471713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.471882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.471909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 
00:36:43.457 [2024-12-15 06:27:03.472079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.472108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.472216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.472246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.472411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.472439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.472686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.472715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.472830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.472857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 
00:36:43.457 [2024-12-15 06:27:03.473025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.473055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.473176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.473205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.473316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.473344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.473519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.473548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.473777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.473805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 
00:36:43.457 [2024-12-15 06:27:03.473926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.473954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.474084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.474113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.474229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.474257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.474419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.474447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.457 [2024-12-15 06:27:03.474710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.474743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 
00:36:43.457 [2024-12-15 06:27:03.474859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.457 [2024-12-15 06:27:03.474892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.457 qpair failed and we were unable to recover it. 00:36:43.458 [2024-12-15 06:27:03.475065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.458 [2024-12-15 06:27:03.475103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.458 qpair failed and we were unable to recover it. 00:36:43.458 [2024-12-15 06:27:03.475350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.458 [2024-12-15 06:27:03.475385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.458 qpair failed and we were unable to recover it. 00:36:43.458 [2024-12-15 06:27:03.475492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.458 [2024-12-15 06:27:03.475527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.458 qpair failed and we were unable to recover it. 00:36:43.458 [2024-12-15 06:27:03.475655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.458 [2024-12-15 06:27:03.475687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.458 qpair failed and we were unable to recover it. 
00:36:43.458 [2024-12-15 06:27:03.475791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.458 [2024-12-15 06:27:03.475823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.458 qpair failed and we were unable to recover it. 00:36:43.458 [2024-12-15 06:27:03.476029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.458 [2024-12-15 06:27:03.476064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.458 qpair failed and we were unable to recover it. 00:36:43.458 [2024-12-15 06:27:03.476338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.458 [2024-12-15 06:27:03.476372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.458 qpair failed and we were unable to recover it. 00:36:43.458 [2024-12-15 06:27:03.476553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.458 [2024-12-15 06:27:03.476592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.458 qpair failed and we were unable to recover it. 00:36:43.458 [2024-12-15 06:27:03.476842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.458 [2024-12-15 06:27:03.476874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.458 qpair failed and we were unable to recover it. 
00:36:43.458 [2024-12-15 06:27:03.476980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.458 [2024-12-15 06:27:03.477027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.458 qpair failed and we were unable to recover it. 00:36:43.458 [2024-12-15 06:27:03.477151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.458 [2024-12-15 06:27:03.477184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.458 qpair failed and we were unable to recover it. 00:36:43.458 [2024-12-15 06:27:03.477290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.458 [2024-12-15 06:27:03.477323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.458 qpair failed and we were unable to recover it. 00:36:43.458 [2024-12-15 06:27:03.477496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.458 [2024-12-15 06:27:03.477529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.458 qpair failed and we were unable to recover it. 00:36:43.458 [2024-12-15 06:27:03.477640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.458 [2024-12-15 06:27:03.477674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.458 qpair failed and we were unable to recover it. 
00:36:43.458 [2024-12-15 06:27:03.477802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.458 [2024-12-15 06:27:03.477836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.458 qpair failed and we were unable to recover it. 00:36:43.458 [2024-12-15 06:27:03.477973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.458 [2024-12-15 06:27:03.478017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.458 qpair failed and we were unable to recover it. 00:36:43.458 [2024-12-15 06:27:03.478238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.458 [2024-12-15 06:27:03.478275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.458 qpair failed and we were unable to recover it. 00:36:43.458 [2024-12-15 06:27:03.478400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.458 [2024-12-15 06:27:03.478433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.458 qpair failed and we were unable to recover it. 00:36:43.458 [2024-12-15 06:27:03.478621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.458 [2024-12-15 06:27:03.478654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.458 qpair failed and we were unable to recover it. 
00:36:43.458 A controller has encountered a failure and is being reset.
00:36:43.458 [2024-12-15 06:27:03.483112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.458 [2024-12-15 06:27:03.483149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420
00:36:43.458 qpair failed and we were unable to recover it.
00:36:43.458 [2024-12-15 06:27:03.484135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.458 [2024-12-15 06:27:03.484187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420
00:36:43.458 qpair failed and we were unable to recover it.
00:36:43.458 [2024-12-15 06:27:03.484561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.458 [2024-12-15 06:27:03.484594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.458 qpair failed and we were unable to recover it. 00:36:43.458 [2024-12-15 06:27:03.484716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.458 [2024-12-15 06:27:03.484749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.458 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.484881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.484914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.485089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.485123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.485250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.485283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 
00:36:43.459 [2024-12-15 06:27:03.485395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.485428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.485534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.485567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.485680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.485713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.485895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.485929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.486065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.486102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 
00:36:43.459 [2024-12-15 06:27:03.486207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.486242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.486356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.486395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.486503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.486536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.486655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.486688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.486805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.486837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 
00:36:43.459 [2024-12-15 06:27:03.486966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.487019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.487197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.487230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.487430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.487462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.487665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.487697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.487882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.487914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 
00:36:43.459 [2024-12-15 06:27:03.488038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.488073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.488315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.488349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.488457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.488489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.488770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.488802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.488909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.488941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 
00:36:43.459 [2024-12-15 06:27:03.489094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.489127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.489249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.489282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.489453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.489486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.489673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.489706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.489950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.489983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 
00:36:43.459 [2024-12-15 06:27:03.490144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.490178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.490291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.490324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.490505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.490537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.490661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.490695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.490803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.490835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 
00:36:43.459 [2024-12-15 06:27:03.491014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.491051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.491233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.491267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.491467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.491499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.491645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.491683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff288000b90 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.491983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.492035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 
00:36:43.459 [2024-12-15 06:27:03.492222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.492256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.492441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.492474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.492614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.492648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.492890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.492923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.493111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.493145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 
00:36:43.459 [2024-12-15 06:27:03.493379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.493413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.493541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.493574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.493757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.459 [2024-12-15 06:27:03.493791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.459 qpair failed and we were unable to recover it. 00:36:43.459 [2024-12-15 06:27:03.493917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.460 [2024-12-15 06:27:03.493951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.460 qpair failed and we were unable to recover it. 00:36:43.460 [2024-12-15 06:27:03.494077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.460 [2024-12-15 06:27:03.494113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.460 qpair failed and we were unable to recover it. 
00:36:43.460 [2024-12-15 06:27:03.494338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.460 [2024-12-15 06:27:03.494372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.460 qpair failed and we were unable to recover it. 00:36:43.460 [2024-12-15 06:27:03.494552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.460 [2024-12-15 06:27:03.494585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.460 qpair failed and we were unable to recover it. 00:36:43.460 [2024-12-15 06:27:03.494835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.460 [2024-12-15 06:27:03.494869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.460 qpair failed and we were unable to recover it. 00:36:43.460 [2024-12-15 06:27:03.495065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.460 [2024-12-15 06:27:03.495101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.460 qpair failed and we were unable to recover it. 00:36:43.460 [2024-12-15 06:27:03.495244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.460 [2024-12-15 06:27:03.495278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.460 qpair failed and we were unable to recover it. 
00:36:43.460 [2024-12-15 06:27:03.495449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.460 [2024-12-15 06:27:03.495483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.460 qpair failed and we were unable to recover it. 00:36:43.460 [2024-12-15 06:27:03.495595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.460 [2024-12-15 06:27:03.495629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.460 qpair failed and we were unable to recover it. 00:36:43.460 [2024-12-15 06:27:03.495747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.460 [2024-12-15 06:27:03.495779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.460 qpair failed and we were unable to recover it. 00:36:43.460 [2024-12-15 06:27:03.495976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.460 [2024-12-15 06:27:03.496021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.460 qpair failed and we were unable to recover it. 00:36:43.460 [2024-12-15 06:27:03.496161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.460 [2024-12-15 06:27:03.496195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.460 qpair failed and we were unable to recover it. 
00:36:43.460 [2024-12-15 06:27:03.496312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.460 [2024-12-15 06:27:03.496344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.460 qpair failed and we were unable to recover it. 00:36:43.460 [2024-12-15 06:27:03.496473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.460 [2024-12-15 06:27:03.496507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.460 qpair failed and we were unable to recover it. 00:36:43.460 [2024-12-15 06:27:03.496691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.460 [2024-12-15 06:27:03.496724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.460 qpair failed and we were unable to recover it. 00:36:43.460 [2024-12-15 06:27:03.496837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.460 [2024-12-15 06:27:03.496870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.460 qpair failed and we were unable to recover it. 00:36:43.460 [2024-12-15 06:27:03.497051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.460 [2024-12-15 06:27:03.497088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.460 qpair failed and we were unable to recover it. 
00:36:43.460 [2024-12-15 06:27:03.497204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.460 [2024-12-15 06:27:03.497242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.460 qpair failed and we were unable to recover it. 00:36:43.460 [2024-12-15 06:27:03.497419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.460 [2024-12-15 06:27:03.497452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.460 qpair failed and we were unable to recover it. 00:36:43.460 [2024-12-15 06:27:03.497632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.460 [2024-12-15 06:27:03.497664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.460 qpair failed and we were unable to recover it. 00:36:43.460 [2024-12-15 06:27:03.497779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.460 [2024-12-15 06:27:03.497811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.460 qpair failed and we were unable to recover it. 00:36:43.460 [2024-12-15 06:27:03.497926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.460 [2024-12-15 06:27:03.497960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.460 qpair failed and we were unable to recover it. 
00:36:43.460 [2024-12-15 06:27:03.498155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.460 [2024-12-15 06:27:03.498191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.460 qpair failed and we were unable to recover it.
00:36:43.460 [2024-12-15 06:27:03.503747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.461 [2024-12-15 06:27:03.503781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.461 qpair failed and we were unable to recover it.
00:36:43.461 [2024-12-15 06:27:03.503883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.461 [2024-12-15 06:27:03.503917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.461 qpair failed and we were unable to recover it.
00:36:43.461 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:43.461 [2024-12-15 06:27:03.507649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.461 [2024-12-15 06:27:03.507709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.461 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:36:43.461 qpair failed and we were unable to recover it.
00:36:43.461 [2024-12-15 06:27:03.507946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.461 [2024-12-15 06:27:03.507983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.461 qpair failed and we were unable to recover it.
00:36:43.461 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:43.461 [2024-12-15 06:27:03.508198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.461 [2024-12-15 06:27:03.508235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.461 qpair failed and we were unable to recover it.
00:36:43.461 [2024-12-15 06:27:03.508366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.461 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:43.461 [2024-12-15 06:27:03.508400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.461 qpair failed and we were unable to recover it.
00:36:43.461 [2024-12-15 06:27:03.508668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.461 [2024-12-15 06:27:03.508701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.461 qpair failed and we were unable to recover it.
00:36:43.461 [2024-12-15 06:27:03.508809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.461 [2024-12-15 06:27:03.508842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff290000b90 with addr=10.0.0.2, port=4420
00:36:43.461 qpair failed and we were unable to recover it.
00:36:43.462 [2024-12-15 06:27:03.521319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.462 [2024-12-15 06:27:03.521352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.462 qpair failed and we were unable to recover it. 00:36:43.462 [2024-12-15 06:27:03.521533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.462 [2024-12-15 06:27:03.521566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.462 qpair failed and we were unable to recover it. 00:36:43.462 [2024-12-15 06:27:03.521751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.462 [2024-12-15 06:27:03.521783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.462 qpair failed and we were unable to recover it. 00:36:43.462 [2024-12-15 06:27:03.521980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.462 [2024-12-15 06:27:03.522021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.462 qpair failed and we were unable to recover it. 00:36:43.462 [2024-12-15 06:27:03.522133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.462 [2024-12-15 06:27:03.522166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.462 qpair failed and we were unable to recover it. 
00:36:43.462 [2024-12-15 06:27:03.522279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.462 [2024-12-15 06:27:03.522311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.462 qpair failed and we were unable to recover it. 00:36:43.462 [2024-12-15 06:27:03.522497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.462 [2024-12-15 06:27:03.522530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.462 qpair failed and we were unable to recover it. 00:36:43.462 [2024-12-15 06:27:03.522637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.462 [2024-12-15 06:27:03.522670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.462 qpair failed and we were unable to recover it. 00:36:43.462 [2024-12-15 06:27:03.522785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.462 [2024-12-15 06:27:03.522818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.462 qpair failed and we were unable to recover it. 00:36:43.462 [2024-12-15 06:27:03.522943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.462 [2024-12-15 06:27:03.522975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.462 qpair failed and we were unable to recover it. 
00:36:43.462 [2024-12-15 06:27:03.523096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.462 [2024-12-15 06:27:03.523130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.462 qpair failed and we were unable to recover it. 00:36:43.462 [2024-12-15 06:27:03.523254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.462 [2024-12-15 06:27:03.523293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.462 qpair failed and we were unable to recover it. 00:36:43.462 [2024-12-15 06:27:03.523478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.462 [2024-12-15 06:27:03.523510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.462 qpair failed and we were unable to recover it. 00:36:43.462 [2024-12-15 06:27:03.523617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.462 [2024-12-15 06:27:03.523651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.462 qpair failed and we were unable to recover it. 00:36:43.462 [2024-12-15 06:27:03.523838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.462 [2024-12-15 06:27:03.523871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.462 qpair failed and we were unable to recover it. 
00:36:43.462 [2024-12-15 06:27:03.524154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.462 [2024-12-15 06:27:03.524188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.462 qpair failed and we were unable to recover it. 00:36:43.462 [2024-12-15 06:27:03.524307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.462 [2024-12-15 06:27:03.524341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.462 qpair failed and we were unable to recover it. 00:36:43.462 [2024-12-15 06:27:03.524535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.462 [2024-12-15 06:27:03.524568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.462 qpair failed and we were unable to recover it. 00:36:43.462 [2024-12-15 06:27:03.524760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.462 [2024-12-15 06:27:03.524793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.462 qpair failed and we were unable to recover it. 00:36:43.462 [2024-12-15 06:27:03.524931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.462 [2024-12-15 06:27:03.524964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.462 qpair failed and we were unable to recover it. 
00:36:43.462 [2024-12-15 06:27:03.525216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.462 [2024-12-15 06:27:03.525250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.462 qpair failed and we were unable to recover it. 00:36:43.462 [2024-12-15 06:27:03.525436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.462 [2024-12-15 06:27:03.525468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.462 qpair failed and we were unable to recover it. 00:36:43.462 [2024-12-15 06:27:03.525650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.462 [2024-12-15 06:27:03.525683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.462 qpair failed and we were unable to recover it. 00:36:43.462 [2024-12-15 06:27:03.525808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.462 [2024-12-15 06:27:03.525842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.462 qpair failed and we were unable to recover it. 00:36:43.462 [2024-12-15 06:27:03.525966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.462 [2024-12-15 06:27:03.526010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.462 qpair failed and we were unable to recover it. 
00:36:43.462 [2024-12-15 06:27:03.526135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.462 [2024-12-15 06:27:03.526168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 00:36:43.463 [2024-12-15 06:27:03.526289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.526321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 00:36:43.463 [2024-12-15 06:27:03.526518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.526551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 00:36:43.463 [2024-12-15 06:27:03.526680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.526713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 00:36:43.463 [2024-12-15 06:27:03.526902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.526934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 
00:36:43.463 [2024-12-15 06:27:03.527205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.527240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 00:36:43.463 [2024-12-15 06:27:03.527480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.527513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 00:36:43.463 [2024-12-15 06:27:03.527730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.527763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 00:36:43.463 [2024-12-15 06:27:03.527888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.527922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 00:36:43.463 [2024-12-15 06:27:03.528158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.528193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 
00:36:43.463 [2024-12-15 06:27:03.528366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.528399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 00:36:43.463 [2024-12-15 06:27:03.528573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.528606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 00:36:43.463 [2024-12-15 06:27:03.528725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.528758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 00:36:43.463 [2024-12-15 06:27:03.528939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.528977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 00:36:43.463 [2024-12-15 06:27:03.529171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.529205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 
00:36:43.463 [2024-12-15 06:27:03.529317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.529350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 00:36:43.463 [2024-12-15 06:27:03.529532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.529565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 00:36:43.463 [2024-12-15 06:27:03.529740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.529774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 00:36:43.463 [2024-12-15 06:27:03.529984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.530025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 00:36:43.463 [2024-12-15 06:27:03.530152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.530185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 
00:36:43.463 [2024-12-15 06:27:03.530363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.530396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 00:36:43.463 [2024-12-15 06:27:03.530520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.530553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 00:36:43.463 [2024-12-15 06:27:03.530800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.530833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 00:36:43.463 [2024-12-15 06:27:03.530952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.530985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 00:36:43.463 [2024-12-15 06:27:03.531117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.531150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 
00:36:43.463 [2024-12-15 06:27:03.531276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.531310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 00:36:43.463 [2024-12-15 06:27:03.531487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.531521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 00:36:43.463 [2024-12-15 06:27:03.531639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.531672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 00:36:43.463 [2024-12-15 06:27:03.531861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.531894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 00:36:43.463 [2024-12-15 06:27:03.532089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.532124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 
00:36:43.463 [2024-12-15 06:27:03.532302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.532336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 00:36:43.463 [2024-12-15 06:27:03.532525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.532558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.463 qpair failed and we were unable to recover it. 00:36:43.463 [2024-12-15 06:27:03.532791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.463 [2024-12-15 06:27:03.532824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.464 qpair failed and we were unable to recover it. 00:36:43.464 [2024-12-15 06:27:03.533011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.464 [2024-12-15 06:27:03.533045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.464 qpair failed and we were unable to recover it. 00:36:43.464 [2024-12-15 06:27:03.533176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.464 [2024-12-15 06:27:03.533209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.464 qpair failed and we were unable to recover it. 
00:36:43.464 [2024-12-15 06:27:03.533382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.464 [2024-12-15 06:27:03.533414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.464 qpair failed and we were unable to recover it. 00:36:43.464 [2024-12-15 06:27:03.533601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.464 [2024-12-15 06:27:03.533633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.464 qpair failed and we were unable to recover it. 00:36:43.464 [2024-12-15 06:27:03.533811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.464 [2024-12-15 06:27:03.533844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.464 qpair failed and we were unable to recover it. 00:36:43.464 [2024-12-15 06:27:03.534019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.464 [2024-12-15 06:27:03.534053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.464 qpair failed and we were unable to recover it. 00:36:43.464 [2024-12-15 06:27:03.534178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.464 [2024-12-15 06:27:03.534212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.464 qpair failed and we were unable to recover it. 
00:36:43.464 [2024-12-15 06:27:03.534395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.464 [2024-12-15 06:27:03.534429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.464 qpair failed and we were unable to recover it. 00:36:43.464 [2024-12-15 06:27:03.534564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.464 [2024-12-15 06:27:03.534597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.464 qpair failed and we were unable to recover it. 00:36:43.723 [2024-12-15 06:27:03.534841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.723 [2024-12-15 06:27:03.534875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.723 qpair failed and we were unable to recover it. 00:36:43.723 [2024-12-15 06:27:03.535068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.723 [2024-12-15 06:27:03.535104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.723 qpair failed and we were unable to recover it. 00:36:43.723 [2024-12-15 06:27:03.535214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.723 [2024-12-15 06:27:03.535248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.723 qpair failed and we were unable to recover it. 
00:36:43.723 [2024-12-15 06:27:03.535440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.723 [2024-12-15 06:27:03.535473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.723 qpair failed and we were unable to recover it. 00:36:43.723 [2024-12-15 06:27:03.535657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.723 [2024-12-15 06:27:03.535690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.723 qpair failed and we were unable to recover it. 00:36:43.723 [2024-12-15 06:27:03.535889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.723 [2024-12-15 06:27:03.535922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.723 qpair failed and we were unable to recover it. 00:36:43.723 [2024-12-15 06:27:03.536107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.723 [2024-12-15 06:27:03.536143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.723 qpair failed and we were unable to recover it. 00:36:43.723 [2024-12-15 06:27:03.536266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.723 [2024-12-15 06:27:03.536300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.723 qpair failed and we were unable to recover it. 
00:36:43.723 [2024-12-15 06:27:03.536424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.723 [2024-12-15 06:27:03.536459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.723 qpair failed and we were unable to recover it. 00:36:43.723 [2024-12-15 06:27:03.536593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.723 [2024-12-15 06:27:03.536626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.723 qpair failed and we were unable to recover it. 00:36:43.723 [2024-12-15 06:27:03.536878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.723 [2024-12-15 06:27:03.536911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.723 qpair failed and we were unable to recover it. 00:36:43.723 [2024-12-15 06:27:03.537025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:43.723 [2024-12-15 06:27:03.537060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89cd0 with addr=10.0.0.2, port=4420 00:36:43.723 qpair failed and we were unable to recover it. 
00:36:43.723 [2024-12-15 06:27:03.537268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:43.723 [2024-12-15 06:27:03.537353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c97c70 with addr=10.0.0.2, port=4420
00:36:43.723 [2024-12-15 06:27:03.537387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97c70 is same with the state(6) to be set
00:36:43.723 [2024-12-15 06:27:03.537422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c97c70 (9): Bad file descriptor
00:36:43.723 [2024-12-15 06:27:03.537450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:36:43.723 [2024-12-15 06:27:03.537472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:36:43.723 [2024-12-15 06:27:03.537496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:36:43.723 Unable to reset the controller.
00:36:43.723 Malloc0
00:36:43.723 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:43.723 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:36:43.723 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:43.723 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:43.723 [2024-12-15 06:27:03.554366] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:43.723 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:43.723 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:36:43.723 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:43.724 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:43.724 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:43.724 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:36:43.724 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:43.724 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:43.724 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:43.724 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:43.724 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:43.724 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:43.724 [2024-12-15 06:27:03.582549] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:43.724 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:43.724 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:36:43.724 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:43.724 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:43.724 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:43.724 06:27:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1208103
00:36:44.661 Controller properly reset.
00:36:49.930 Initializing NVMe Controllers 00:36:49.930 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:49.930 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:49.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:49.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:49.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:49.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:49.930 Initialization complete. Launching workers. 00:36:49.930 Starting thread on core 1 00:36:49.930 Starting thread on core 2 00:36:49.930 Starting thread on core 3 00:36:49.930 Starting thread on core 0 00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:49.930 00:36:49.930 real 0m10.571s 00:36:49.930 user 0m34.684s 00:36:49.930 sys 0m6.162s 00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:49.930 ************************************ 00:36:49.930 END TEST nvmf_target_disconnect_tc2 00:36:49.930 ************************************ 00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:49.930 06:27:09 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:49.930 rmmod nvme_tcp 00:36:49.930 rmmod nvme_fabrics 00:36:49.930 rmmod nvme_keyring 00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1208786 ']' 00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1208786 00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1208786 ']' 00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1208786 00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1208786 00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 
00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1208786' 00:36:49.930 killing process with pid 1208786 00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 1208786 00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1208786 00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:36:49.930 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:36:49.931 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:49.931 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:36:49.931 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:49.931 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:49.931 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:49.931 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:49.931 06:27:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:51.835 06:27:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:51.835 00:36:51.835 real 0m19.271s 00:36:51.835 user 1m1.349s 00:36:51.835 
sys 0m11.278s 00:36:51.835 06:27:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:51.835 06:27:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:51.835 ************************************ 00:36:51.835 END TEST nvmf_target_disconnect 00:36:51.835 ************************************ 00:36:51.835 06:27:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:51.835 00:36:51.835 real 7m22.203s 00:36:51.835 user 17m5.302s 00:36:51.835 sys 2m10.413s 00:36:51.835 06:27:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:51.835 06:27:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.835 ************************************ 00:36:51.835 END TEST nvmf_host 00:36:51.835 ************************************ 00:36:51.835 06:27:11 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:36:51.835 06:27:11 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:36:51.835 06:27:11 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:51.835 06:27:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:51.835 06:27:11 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:51.835 06:27:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:51.835 ************************************ 00:36:51.835 START TEST nvmf_target_core_interrupt_mode 00:36:51.835 ************************************ 00:36:51.836 06:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:52.095 * Looking for test storage... 
00:36:52.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:36:52.095 06:27:12 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:52.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:52.095 --rc 
genhtml_branch_coverage=1 00:36:52.095 --rc genhtml_function_coverage=1 00:36:52.095 --rc genhtml_legend=1 00:36:52.095 --rc geninfo_all_blocks=1 00:36:52.095 --rc geninfo_unexecuted_blocks=1 00:36:52.095 00:36:52.095 ' 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:52.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:52.095 --rc genhtml_branch_coverage=1 00:36:52.095 --rc genhtml_function_coverage=1 00:36:52.095 --rc genhtml_legend=1 00:36:52.095 --rc geninfo_all_blocks=1 00:36:52.095 --rc geninfo_unexecuted_blocks=1 00:36:52.095 00:36:52.095 ' 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:52.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:52.095 --rc genhtml_branch_coverage=1 00:36:52.095 --rc genhtml_function_coverage=1 00:36:52.095 --rc genhtml_legend=1 00:36:52.095 --rc geninfo_all_blocks=1 00:36:52.095 --rc geninfo_unexecuted_blocks=1 00:36:52.095 00:36:52.095 ' 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:52.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:52.095 --rc genhtml_branch_coverage=1 00:36:52.095 --rc genhtml_function_coverage=1 00:36:52.095 --rc genhtml_legend=1 00:36:52.095 --rc geninfo_all_blocks=1 00:36:52.095 --rc geninfo_unexecuted_blocks=1 00:36:52.095 00:36:52.095 ' 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:52.095 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:52.096 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:52.096 
06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:52.096 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:52.096 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:52.096 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:36:52.096 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:52.096 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:52.096 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:52.096 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.096 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.096 06:27:12 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.096 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:36:52.096 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.096 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:36:52.096 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:52.096 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:52.096 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:52.096 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:52.096 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:52.096 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:52.096 
06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:52.096 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:52.096 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:52.096 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:52.096 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:36:52.096 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:36:52.096 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:36:52.096 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:52.096 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:52.096 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:52.096 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:52.096 ************************************ 00:36:52.096 START TEST nvmf_abort 00:36:52.096 ************************************ 00:36:52.096 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:52.355 * Looking for test storage... 
00:36:52.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:52.355 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:52.355 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:36:52.356 06:27:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:52.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:52.356 --rc genhtml_branch_coverage=1 00:36:52.356 --rc genhtml_function_coverage=1 00:36:52.356 --rc genhtml_legend=1 00:36:52.356 --rc geninfo_all_blocks=1 00:36:52.356 --rc geninfo_unexecuted_blocks=1 00:36:52.356 00:36:52.356 ' 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:52.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:52.356 --rc genhtml_branch_coverage=1 00:36:52.356 --rc genhtml_function_coverage=1 00:36:52.356 --rc genhtml_legend=1 00:36:52.356 --rc geninfo_all_blocks=1 00:36:52.356 --rc geninfo_unexecuted_blocks=1 00:36:52.356 00:36:52.356 ' 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:52.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:52.356 --rc genhtml_branch_coverage=1 00:36:52.356 --rc genhtml_function_coverage=1 00:36:52.356 --rc genhtml_legend=1 00:36:52.356 --rc geninfo_all_blocks=1 00:36:52.356 --rc geninfo_unexecuted_blocks=1 00:36:52.356 00:36:52.356 ' 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:52.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:52.356 --rc genhtml_branch_coverage=1 00:36:52.356 --rc genhtml_function_coverage=1 00:36:52.356 --rc genhtml_legend=1 00:36:52.356 --rc geninfo_all_blocks=1 00:36:52.356 --rc geninfo_unexecuted_blocks=1 00:36:52.356 00:36:52.356 ' 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:52.356 06:27:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:52.356 06:27:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:36:52.356 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:36:52.357 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:52.357 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:52.357 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:52.357 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:52.357 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:52.357 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:52.357 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:52.357 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:52.357 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:52.357 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:52.357 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:36:52.357 06:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:58.926 06:27:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:58.926 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:58.926 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:58.926 
06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:58.926 Found net devices under 0000:af:00.0: cvl_0_0 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:58.926 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:58.926 Found net devices under 0000:af:00.1: cvl_0_1 00:36:58.927 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:58.927 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:58.927 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:36:58.927 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:58.927 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:58.927 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:58.927 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:58.927 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:58.927 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:58.927 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:58.927 06:27:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:58.927 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:58.927 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:58.927 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:58.927 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:58.927 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:58.927 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:58.927 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:58.927 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:58.927 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:58.927 06:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:58.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:58.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:36:58.927 00:36:58.927 --- 10.0.0.2 ping statistics --- 00:36:58.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:58.927 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:58.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:58.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:36:58.927 00:36:58.927 --- 10.0.0.1 ping statistics --- 00:36:58.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:58.927 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1213745 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1213745 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1213745 ']' 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:58.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:58.927 [2024-12-15 06:27:18.287521] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:58.927 [2024-12-15 06:27:18.288494] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:36:58.927 [2024-12-15 06:27:18.288532] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:58.927 [2024-12-15 06:27:18.366474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:58.927 [2024-12-15 06:27:18.389039] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:58.927 [2024-12-15 06:27:18.389074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:58.927 [2024-12-15 06:27:18.389082] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:58.927 [2024-12-15 06:27:18.389088] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:58.927 [2024-12-15 06:27:18.389093] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:58.927 [2024-12-15 06:27:18.390406] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:36:58.927 [2024-12-15 06:27:18.390517] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:58.927 [2024-12-15 06:27:18.390518] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:36:58.927 [2024-12-15 06:27:18.453049] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:58.927 [2024-12-15 06:27:18.453948] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:58.927 [2024-12-15 06:27:18.454302] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:36:58.927 [2024-12-15 06:27:18.454409] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:58.927 [2024-12-15 06:27:18.519248] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:36:58.927 Malloc0 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:58.927 Delay0 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.927 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:36:58.928 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.928 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:58.928 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.928 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:36:58.928 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.928 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:58.928 [2024-12-15 06:27:18.607194] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:58.928 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.928 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:58.928 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.928 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:58.928 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.928 06:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:36:58.928 [2024-12-15 06:27:18.736322] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:37:00.831 Initializing NVMe Controllers 00:37:00.831 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:00.831 controller IO queue size 128 less than required 00:37:00.831 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:37:00.831 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:37:00.831 Initialization complete. Launching workers. 
00:37:00.831 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37649 00:37:00.831 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37706, failed to submit 66 00:37:00.831 success 37649, unsuccessful 57, failed 0 00:37:00.831 06:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:00.831 06:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.831 06:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:00.831 06:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.831 06:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:37:00.831 06:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:37:00.831 06:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:00.831 06:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:37:00.831 06:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:00.831 06:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:37:00.831 06:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:00.831 06:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:00.831 rmmod nvme_tcp 00:37:00.831 rmmod nvme_fabrics 00:37:00.831 rmmod nvme_keyring 00:37:00.831 06:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:00.831 06:27:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:37:00.831 06:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:37:00.831 06:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1213745 ']' 00:37:00.831 06:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1213745 00:37:00.831 06:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1213745 ']' 00:37:00.831 06:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1213745 00:37:00.831 06:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:37:00.831 06:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:00.831 06:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1213745 00:37:00.831 06:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:00.831 06:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:00.831 06:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1213745' 00:37:00.831 killing process with pid 1213745 00:37:00.831 06:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1213745 00:37:00.831 06:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1213745 00:37:01.091 06:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:01.091 06:27:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:01.091 06:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:01.091 06:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:37:01.091 06:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:37:01.091 06:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:01.091 06:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:37:01.091 06:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:01.091 06:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:01.091 06:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:01.091 06:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:01.091 06:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:02.996 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:03.255 00:37:03.255 real 0m10.963s 00:37:03.255 user 0m10.204s 00:37:03.255 sys 0m5.515s 00:37:03.255 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:03.255 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:03.255 ************************************ 00:37:03.255 END TEST nvmf_abort 00:37:03.255 ************************************ 00:37:03.255 06:27:23 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:37:03.255 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:03.255 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:03.255 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:03.255 ************************************ 00:37:03.256 START TEST nvmf_ns_hotplug_stress 00:37:03.256 ************************************ 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:37:03.256 * Looking for test storage... 
00:37:03.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:37:03.256 06:27:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:37:03.256 06:27:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:03.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.256 --rc genhtml_branch_coverage=1 00:37:03.256 --rc genhtml_function_coverage=1 00:37:03.256 --rc genhtml_legend=1 00:37:03.256 --rc geninfo_all_blocks=1 00:37:03.256 --rc geninfo_unexecuted_blocks=1 00:37:03.256 00:37:03.256 ' 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:03.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.256 --rc genhtml_branch_coverage=1 00:37:03.256 --rc genhtml_function_coverage=1 00:37:03.256 --rc genhtml_legend=1 00:37:03.256 --rc geninfo_all_blocks=1 00:37:03.256 --rc geninfo_unexecuted_blocks=1 00:37:03.256 00:37:03.256 ' 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:03.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.256 --rc genhtml_branch_coverage=1 00:37:03.256 --rc genhtml_function_coverage=1 00:37:03.256 --rc genhtml_legend=1 00:37:03.256 --rc geninfo_all_blocks=1 00:37:03.256 --rc geninfo_unexecuted_blocks=1 00:37:03.256 00:37:03.256 ' 00:37:03.256 06:27:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:03.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.256 --rc genhtml_branch_coverage=1 00:37:03.256 --rc genhtml_function_coverage=1 00:37:03.256 --rc genhtml_legend=1 00:37:03.256 --rc geninfo_all_blocks=1 00:37:03.256 --rc geninfo_unexecuted_blocks=1 00:37:03.256 00:37:03.256 ' 00:37:03.256 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:03.515 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:37:03.515 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:03.515 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:03.515 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:03.515 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:03.515 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:03.515 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:03.515 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:03.515 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:03.515 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:03.515 06:27:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:03.515 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:03.515 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:03.515 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:03.515 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:03.515 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:03.515 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:03.515 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:03.515 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:37:03.515 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:03.515 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:03.515 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:03.516 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.516 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.516 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.516 
06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:37:03.516 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.516 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:37:03.516 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:03.516 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:03.516 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:03.516 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:03.516 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:03.516 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:03.516 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:03.516 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:03.516 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:03.516 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:03.516 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:03.516 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:37:03.516 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:03.516 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:03.516 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:03.516 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:03.516 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:03.516 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:03.516 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:03.516 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:03.516 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:03.516 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:37:03.516 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:37:03.516 06:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:10.085 06:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:10.085 06:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:37:10.085 06:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:10.085 06:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:10.085 06:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:10.085 06:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:10.085 06:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:10.085 06:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:37:10.085 06:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:10.085 06:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:37:10.085 06:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:37:10.085 
06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:10.085 06:27:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:10.085 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:10.085 06:27:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:10.085 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:10.085 
06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:10.085 Found net devices under 0000:af:00.0: cvl_0_0 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:10.085 Found net devices under 0000:af:00.1: cvl_0_1 00:37:10.085 
06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:10.085 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:10.086 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:10.086 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:37:10.086 00:37:10.086 --- 10.0.0.2 ping statistics --- 00:37:10.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:10.086 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:10.086 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:10.086 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:37:10.086 00:37:10.086 --- 10.0.0.1 ping statistics --- 00:37:10.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:10.086 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:10.086 06:27:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1217642 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1217642 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1217642 ']' 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:10.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:10.086 [2024-12-15 06:27:29.451870] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:10.086 [2024-12-15 06:27:29.452767] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:37:10.086 [2024-12-15 06:27:29.452802] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:10.086 [2024-12-15 06:27:29.534773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:10.086 [2024-12-15 06:27:29.556428] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:10.086 [2024-12-15 06:27:29.556463] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:10.086 [2024-12-15 06:27:29.556471] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:10.086 [2024-12-15 06:27:29.556477] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:10.086 [2024-12-15 06:27:29.556482] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
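The `nvmf_tcp_init` sequence earlier in the log (netns creation, address assignment, the iptables accept rule for port 4420, and the cross-namespace pings) can be sketched as below. The `run` helper only echoes each command, since the real ones need root and the physical `cvl_0_*` interfaces; all names mirror the log:

```shell
# Dry-run sketch of nvmf_tcp_init from nvmf/common.sh; `run` echoes rather
# than executes, so this is safe to run anywhere.
nvmf_tcp_init_sketch() {
    local tgt_if=cvl_0_0 ini_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    run() { echo "+ $*"; }

    run ip -4 addr flush "$tgt_if"
    run ip -4 addr flush "$ini_if"
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"          # target side lives in the netns
    run ip addr add 10.0.0.1/24 dev "$ini_if"      # initiator IP on the host side
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up
    # Let NVMe/TCP traffic through to the initiator-side interface.
    run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                         # host -> target
    run ip netns exec "$ns" ping -c 1 10.0.0.1     # target -> host
}
nvmf_tcp_init_sketch
```

The successful pings in the log confirm this plumbing before `nvmf_tgt` is started inside the namespace via `ip netns exec cvl_0_0_ns_spdk`.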
00:37:10.086 [2024-12-15 06:27:29.561009] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:37:10.086 [2024-12-15 06:27:29.561100] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:10.086 [2024-12-15 06:27:29.561100] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:37:10.086 [2024-12-15 06:27:29.622763] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:10.086 [2024-12-15 06:27:29.622764] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:10.086 [2024-12-15 06:27:29.623353] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:10.086 [2024-12-15 06:27:29.623551] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:10.086 [2024-12-15 06:27:29.849775] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:10.086 06:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:10.086 06:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:10.345 [2024-12-15 06:27:30.254220] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:10.345 06:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:10.345 06:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:37:10.604 Malloc0 00:37:10.604 06:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:10.863 Delay0 00:37:10.863 06:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:11.122 06:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:37:11.122 NULL1 00:37:11.122 06:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:37:11.381 06:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1217930 00:37:11.381 06:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:37:11.381 06:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217930 00:37:11.381 06:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:12.759 Read completed with error (sct=0, sc=11) 00:37:12.759 06:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:12.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:12.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:12.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
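The repeating pattern that follows (remove namespace 1, re-add `Delay0`, bump `null_size`, resize `NULL1`) is the core of ns_hotplug_stress.sh. A minimal sketch of that loop, with `rpc` echoing instead of calling `scripts/rpc.py` and the perf-process liveness check (`kill -0 $PERF_PID` in the log) stubbed out as an assumption:

```shell
# Dry-run sketch of the ns_hotplug_stress loop; rpc() echoes the JSON-RPC
# call it would make, and perf_alive() stands in for `kill -0 $PERF_PID`.
rpc() { echo "rpc.py $*"; }
perf_alive() { return 0; }

NQN=nqn.2016-06.io.spdk:cnode1
null_size=1000

hotplug_iteration() {
    rpc nvmf_subsystem_remove_ns "$NQN" 1     # yank namespace 1 under live I/O
    rpc nvmf_subsystem_add_ns "$NQN" Delay0   # plug the Delay0 bdev back in
    null_size=$((null_size + 1))
    rpc bdev_null_resize NULL1 "$null_size"   # grow NULL1 by one size unit
}

# Three iterations for illustration; the real test loops while perf runs.
for _ in 1 2 3; do
    perf_alive || break
    hotplug_iteration
done
echo "final null_size=$null_size"             # -> final null_size=1003
```

This matches the log's progression `null_size=1001`, `1002`, `1003`, …, with `spdk_nvme_perf` reporting `Read completed with error (sct=0, sc=11)` (invalid namespace or format) each time the namespace disappears under it.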
00:37:12.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:12.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:12.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:12.759 06:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:37:12.759 06:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:37:13.017 true 00:37:13.017 06:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217930 00:37:13.017 06:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:13.953 06:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:13.953 06:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:37:13.953 06:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:37:14.211 true 00:37:14.211 06:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217930 00:37:14.211 06:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:37:14.470 06:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:14.470 06:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:37:14.470 06:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:37:14.728 true 00:37:14.728 06:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217930 00:37:14.728 06:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:15.663 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:15.663 06:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:15.922 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:15.922 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:15.922 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:15.922 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:15.922 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:15.922 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:15.922 06:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1004 00:37:15.922 06:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:37:16.180 true 00:37:16.180 06:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217930 00:37:16.180 06:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:17.115 06:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:17.115 06:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:37:17.115 06:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:37:17.374 true 00:37:17.374 06:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217930 00:37:17.374 06:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:17.632 06:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:17.891 06:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:37:17.891 06:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:37:17.891 true 00:37:18.150 06:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217930 00:37:18.150 06:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:19.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:19.085 06:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:19.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:19.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:19.375 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:19.375 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:19.375 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:19.375 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:19.375 06:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:37:19.375 06:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:37:19.671 true 00:37:19.671 06:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1217930 00:37:19.671 06:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:20.607 06:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:20.607 06:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:37:20.607 06:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:37:20.865 true 00:37:20.865 06:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217930 00:37:20.865 06:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:21.123 06:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:21.123 06:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:37:21.123 06:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:37:21.382 true 00:37:21.382 06:27:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217930 00:37:21.382 06:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:22.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:22.758 06:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:22.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:22.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:22.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:22.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:22.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:22.758 06:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:37:22.758 06:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:37:23.017 true 00:37:23.017 06:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217930 00:37:23.017 06:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:23.953 06:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:23.953 06:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:37:23.953 06:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:37:24.211 true 00:37:24.211 06:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217930 00:37:24.211 06:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:24.211 06:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:24.469 06:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:37:24.469 06:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:37:24.727 true 00:37:24.727 06:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217930 00:37:24.727 06:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:25.658 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:25.658 06:27:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:25.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:25.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:25.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:25.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:25.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:25.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:25.915 06:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:37:25.915 06:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:37:26.173 true 00:37:26.173 06:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217930 00:37:26.173 06:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:27.108 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:27.108 06:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:27.108 06:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:37:27.108 06:27:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:37:27.367 true 00:37:27.367 06:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217930 00:37:27.367 06:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:27.626 06:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:27.884 06:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:37:27.884 06:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:37:28.142 true 00:37:28.142 06:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217930 00:37:28.142 06:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:29.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:29.081 06:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:29.081 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:37:29.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:29.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:29.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:29.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:29.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:29.343 06:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:37:29.343 06:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:37:29.601 true 00:37:29.601 06:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217930 00:37:29.601 06:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:30.540 06:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:30.540 06:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:37:30.540 06:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:37:30.798 true 00:37:30.799 06:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217930 00:37:30.799 06:27:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:31.057 06:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:31.316 06:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:37:31.316 06:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:37:31.316 true 00:37:31.316 06:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217930 00:37:31.316 06:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:32.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:32.694 06:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:32.694 06:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:37:32.694 06:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:37:32.694 true 00:37:32.953 06:27:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217930 00:37:32.953 06:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:32.953 06:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:33.212 06:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:37:33.212 06:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:37:33.470 true 00:37:33.471 06:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217930 00:37:33.471 06:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:34.849 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:34.849 06:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:34.849 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:34.849 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:34.849 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:34.849 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:37:34.849 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:34.849 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:34.849 06:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:37:34.849 06:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:37:35.107 true 00:37:35.107 06:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217930 00:37:35.107 06:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:36.129 06:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:36.129 06:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:37:36.129 06:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:37:36.129 true 00:37:36.129 06:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217930 00:37:36.129 06:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:36.388 06:27:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:36.646 06:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:37:36.646 06:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:37:36.905 true 00:37:36.905 06:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217930 00:37:36.905 06:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:37.841 06:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:38.100 06:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:37:38.100 06:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:37:38.100 true 00:37:38.359 06:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217930 00:37:38.359 06:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
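The records above repeat one unit of the stress loop: check the target is alive (`kill -0`, sh@44), hot-remove namespace 1 (sh@45), hot-add it back (sh@46), then bump and apply the null bdev size (sh@49-@50). A minimal runnable sketch of that control flow follows; the `rpc` function here is a hypothetical stand-in stub for `scripts/rpc.py` (which needs a live SPDK target), and the pid/size values are illustrative, not taken from the real script.

```shell
#!/usr/bin/env bash
# Sketch of the sequential hotplug loop traced above (sh@44-@50).
# rpc() is a stand-in for scripts/rpc.py so the loop runs anywhere.
rpc() { echo "rpc $*"; }

target_pid=$$        # stand-in for the nvmf_tgt pid probed with kill -0
null_size=1013       # namespace size counter; grows by 1 per iteration

for i in 1 2 3; do
    kill -0 "$target_pid" || break                               # sh@44: target still alive?
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45: hot-remove ns 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46: hot-add it back
    null_size=$((null_size + 1))                                 # sh@49: bump the size
    rpc bdev_null_resize NULL1 "$null_size"                      # sh@50: resize the null bdev
done
echo "final null_size=$null_size"
```

Initiators keep I/O running throughout, which is why the log interleaves "Read completed with error (sct=0, sc=11)" messages: reads race the remove/add of the namespace.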
00:37:38.359 06:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:38.618 06:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:37:38.618 06:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:37:38.877 true 00:37:38.877 06:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217930 00:37:38.877 06:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:39.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:40.070 06:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:40.070 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:40.070 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:40.070 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:40.070 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:40.070 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:40.070 06:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:37:40.070 06:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:37:40.328 true
00:37:40.328 06:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217930
00:37:40.328 06:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:41.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:37:41.265 06:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:41.265 06:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:37:41.265 06:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:37:41.524 true
00:37:41.524 06:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217930
00:37:41.524 06:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:41.524 Initializing NVMe Controllers
00:37:41.524 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:37:41.524 Controller IO queue size 128, less than required.
00:37:41.524 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:41.524 Controller IO queue size 128, less than required.
00:37:41.524 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:41.524 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:37:41.524 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:37:41.524 Initialization complete. Launching workers.
00:37:41.524 ========================================================
00:37:41.524                                                           Latency(us)
00:37:41.524 Device Information                                      : IOPS       MiB/s    Average     min         max
00:37:41.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1947.88       0.95    45067.26    1991.29  1016036.00
00:37:41.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   18261.63       8.92     7009.02    1576.23   368627.99
00:37:41.524 ========================================================
00:37:41.524 Total                                                   :   20209.50       9.87    10677.23    1576.23  1016036.00
00:37:41.524
00:37:41.782 06:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:42.041 06:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:37:42.041 06:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:37:42.041 true
00:37:42.299 06:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1217930
00:37:42.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1217930) - No such process
00:37:42.299 06:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@53 -- # wait 1217930 00:37:42.299 06:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:42.299 06:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:42.559 06:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:37:42.559 06:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:37:42.559 06:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:37:42.559 06:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:42.559 06:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:37:42.817 null0 00:37:42.817 06:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:42.817 06:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:42.817 06:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:37:42.817 null1 00:37:42.817 06:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:42.817 
06:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:42.817 06:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:37:43.076 null2 00:37:43.076 06:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:43.076 06:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:43.076 06:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:37:43.335 null3 00:37:43.335 06:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:43.335 06:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:43.335 06:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:37:43.335 null4 00:37:43.594 06:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:43.594 06:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:43.594 06:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:37:43.594 null5 00:37:43.594 06:28:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:43.594 06:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:43.594 06:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:37:43.853 null6 00:37:43.853 06:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:43.853 06:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:43.853 06:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:37:44.112 null7 00:37:44.112 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:44.112 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:44.112 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:37:44.112 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:44.112 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:37:44.112 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
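The phase above (sh@58-@60) provisions eight null bdevs, null0 through null7, each 100 MB with a 4096-byte block size, one per worker thread. A small runnable sketch of that creation loop, with `rpc` again a hypothetical stub in place of `scripts/rpc.py`:

```shell
#!/usr/bin/env bash
# Sketch of the bdev provisioning loop traced above (sh@58-@60).
# rpc() is a stand-in for scripts/rpc.py so the loop runs anywhere.
rpc() { echo "created $2"; }

nthreads=8        # sh@58: one null bdev per worker
created=()
for ((i = 0; i < nthreads; i++)); do
    rpc bdev_null_create "null$i" 100 4096   # sh@60: name, size in MB, block size
    created+=("null$i")
done
echo "bdevs: ${created[*]}"
```

Null bdevs discard writes and return zeroes on reads, which keeps the add/remove churn cheap so the test exercises namespace hotplug paths rather than real I/O.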
00:37:44.112 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:37:44.112 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:44.112 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:44.112 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:44.112 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:44.112 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:44.112 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:44.112 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:44.113 06:28:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1223119 1223121 1223122 1223124 1223126 1223128 1223130 1223132 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:44.113 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:44.372 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:44.372 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:44.372 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:44.372 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:44.372 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:44.372 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1
00:37:44.372 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.372 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.372 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:37:44.372 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.372 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.372 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:37:44.372 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.372 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.372 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.372 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:37:44.372 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.372 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:37:44.372 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.372 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.372 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:37:44.372 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.372 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.372 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:37:44.372 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.372 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.372 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:37:44.631 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:44.631 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:44.631 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:37:44.631 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:37:44.631 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:37:44.631 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:37:44.631 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:37:44.631 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:37:44.889 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.890 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.890 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:37:44.890 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.890 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.890 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:37:44.890 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.890 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.890 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:37:44.890 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.890 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.890 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:37:44.890 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.890 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.890 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:37:44.890 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.890 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.890 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:37:44.890 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.890 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.890 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:37:44.890 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.890 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.890 06:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:37:45.148 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:37:45.148 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:45.148 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:45.148 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:37:45.148 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:37:45.148 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:37:45.148 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:37:45.148 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:37:45.148 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.148 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.148 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:37:45.148 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.148 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.148 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:37:45.148 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.148 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.149 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:37:45.149 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.149 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.149 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:37:45.407 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.407 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.407 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:37:45.407 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.407 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.407 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:37:45.407 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.407 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.407 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:37:45.407 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.407 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.407 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:37:45.407 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:37:45.407 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:45.407 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:37:45.407 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:37:45.407 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:37:45.407 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:45.407 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:37:45.407 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:37:45.665 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.665 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.665 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:37:45.665 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.665 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.665 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.665 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.665 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:37:45.665 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:37:45.665 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.665 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.665 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:37:45.665 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.665 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.665 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:37:45.665 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.665 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.665 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:37:45.665 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.665 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.665 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:37:45.665 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.665 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.665 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:37:45.924 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:45.924 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:37:45.924 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:37:45.924 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:45.924 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:37:45.924 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:37:45.924 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:37:45.924 06:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:37:46.183 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:46.183 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:46.183 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:37:46.183 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:46.183 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:46.183 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:37:46.183 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:46.183 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:46.183 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:37:46.183 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:46.183 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:46.183 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:37:46.183 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:46.183 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:46.183 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:37:46.183 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:46.183 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:46.183 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:37:46.183 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:46.183 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:46.183 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:46.183 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:46.183 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:37:46.183 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:37:46.183 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:46.183 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:37:46.183 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:37:46.183 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:37:46.183 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:37:46.442 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:46.442 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:37:46.442 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:37:46.442 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:46.442 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:46.442 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:37:46.442 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:46.442 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:46.442 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:37:46.442 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:46.442 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:46.442 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:37:46.442 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:46.442 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:46.443 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:37:46.443 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:46.443 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:46.443 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:37:46.443 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:46.443 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:46.443 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:37:46.443 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:46.443 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:46.443 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:37:46.443 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:46.443 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:46.443 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:37:46.702 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:46.702 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:37:46.702 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:37:46.702 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:37:46.702 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:46.702 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:37:46.702 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:37:46.702 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:37:46.960 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:46.960 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:46.960 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:46.960 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:46.960 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:37:46.960 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:37:46.960 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:46.960 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:46.960 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:37:46.960 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:46.960 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:46.960 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:37:46.960 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:46.960 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:46.960 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:37:46.960 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:46.961 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:46.961 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1
null6 00:37:46.961 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.961 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.961 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:46.961 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.961 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.961 06:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:47.220 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:47.220 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:47.220 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:47.220 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:37:47.220 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:47.220 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:47.220 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:47.220 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:47.220 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.220 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.220 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:47.479 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.479 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.479 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:47.479 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.479 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.479 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:47.479 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.479 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.479 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:47.479 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.479 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.479 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.479 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:47.479 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.479 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:47.479 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.479 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.479 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:47.479 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.479 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.479 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:47.479 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:47.479 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:47.479 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:47.479 06:28:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:47.479 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:47.479 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:47.479 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:47.479 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:47.738 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.738 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.738 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:47.738 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.738 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:37:47.738 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:47.738 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.738 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.738 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:47.738 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.738 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.738 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:47.738 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.738 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.738 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.738 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.738 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:47.738 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:47.738 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.738 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.738 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:47.738 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.738 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.738 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:47.997 06:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:47.997 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:47.997 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:47.997 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:47.997 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:47.997 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:47.997 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:47.997 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:48.256 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:48.256 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:48.256 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:48.256 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:48.256 06:28:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:48.256 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:48.256 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:48.256 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:48.256 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:48.256 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:48.256 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:48.256 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:48.256 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:48.256 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:48.256 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:48.256 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:48.256 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:37:48.256 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:37:48.256 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:37:48.256 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:37:48.256 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:48.256 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:37:48.256 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:48.256 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:48.256 rmmod nvme_tcp 00:37:48.256 rmmod nvme_fabrics 00:37:48.256 rmmod nvme_keyring 00:37:48.256 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:48.256 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:37:48.256 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:37:48.256 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1217642 ']' 00:37:48.257 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1217642 00:37:48.257 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1217642 ']' 00:37:48.257 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1217642 00:37:48.257 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:37:48.257 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:48.257 06:28:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1217642 00:37:48.257 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:48.257 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:48.257 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1217642' 00:37:48.257 killing process with pid 1217642 00:37:48.257 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1217642 00:37:48.257 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1217642 00:37:48.516 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:48.516 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:48.516 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:48.516 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:37:48.516 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:37:48.516 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:48.516 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:37:48.516 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:48.516 
06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:48.516 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:48.516 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:48.516 06:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:51.052 00:37:51.052 real 0m47.375s 00:37:51.052 user 2m57.283s 00:37:51.052 sys 0m19.221s 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:51.052 ************************************ 00:37:51.052 END TEST nvmf_ns_hotplug_stress 00:37:51.052 ************************************ 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:51.052 ************************************ 00:37:51.052 START TEST nvmf_delete_subsystem 00:37:51.052 ************************************ 00:37:51.052 06:28:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:51.052 * Looking for test storage... 00:37:51.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:37:51.052 06:28:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:51.052 06:28:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:51.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.052 --rc genhtml_branch_coverage=1 00:37:51.052 --rc genhtml_function_coverage=1 00:37:51.052 --rc genhtml_legend=1 00:37:51.052 --rc geninfo_all_blocks=1 00:37:51.052 --rc geninfo_unexecuted_blocks=1 00:37:51.052 00:37:51.052 ' 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:51.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.052 --rc genhtml_branch_coverage=1 00:37:51.052 --rc genhtml_function_coverage=1 00:37:51.052 --rc genhtml_legend=1 00:37:51.052 --rc geninfo_all_blocks=1 00:37:51.052 --rc geninfo_unexecuted_blocks=1 00:37:51.052 00:37:51.052 ' 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:51.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.052 --rc genhtml_branch_coverage=1 00:37:51.052 --rc 
genhtml_function_coverage=1 00:37:51.052 --rc genhtml_legend=1 00:37:51.052 --rc geninfo_all_blocks=1 00:37:51.052 --rc geninfo_unexecuted_blocks=1 00:37:51.052 00:37:51.052 ' 00:37:51.052 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:51.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.052 --rc genhtml_branch_coverage=1 00:37:51.052 --rc genhtml_function_coverage=1 00:37:51.052 --rc genhtml_legend=1 00:37:51.052 --rc geninfo_all_blocks=1 00:37:51.052 --rc geninfo_unexecuted_blocks=1 00:37:51.052 00:37:51.052 ' 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:51.053 06:28:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:37:51.053 06:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- 
# e810=() 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:56.338 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:56.338 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:56.338 Found net devices under 0000:af:00.0: cvl_0_0 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:56.338 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:56.339 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:56.339 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:56.339 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:56.339 06:28:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:56.339 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:56.339 Found net devices under 0000:af:00.1: cvl_0_1 00:37:56.339 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:56.339 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:56.339 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:37:56.339 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:56.339 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:56.339 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:56.598 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:56.598 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:56.598 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:56.598 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:56.598 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:56.598 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:56.598 06:28:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:56.598 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:56.598 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:56.598 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:56.598 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:56.598 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:56.598 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:56.598 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:56.598 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:56.598 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:56.598 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:56.598 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:56.598 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:56.598 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:56.598 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:56.598 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:56.598 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:56.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:56.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:37:56.598 00:37:56.598 --- 10.0.0.2 ping statistics --- 00:37:56.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:56.598 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:37:56.598 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:56.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:56.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:37:56.598 00:37:56.598 --- 10.0.0.1 ping statistics --- 00:37:56.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:56.598 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:37:56.599 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:56.599 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:37:56.599 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:56.599 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:56.599 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:56.599 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:56.599 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:56.599 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:56.599 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:56.858 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:37:56.858 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:56.858 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:56.858 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:37:56.858 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1227383 00:37:56.858 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1227383 00:37:56.858 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:56.858 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1227383 ']' 00:37:56.858 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:56.858 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:56.858 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:56.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:56.858 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:56.858 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:56.858 [2024-12-15 06:28:16.809790] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:56.858 [2024-12-15 06:28:16.810757] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:37:56.858 [2024-12-15 06:28:16.810795] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:56.858 [2024-12-15 06:28:16.890340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:56.858 [2024-12-15 06:28:16.912552] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:56.858 [2024-12-15 06:28:16.912589] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:56.858 [2024-12-15 06:28:16.912600] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:56.858 [2024-12-15 06:28:16.912606] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:56.858 [2024-12-15 06:28:16.912612] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:56.858 [2024-12-15 06:28:16.913718] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:56.858 [2024-12-15 06:28:16.913721] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:56.858 [2024-12-15 06:28:16.977074] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:56.858 [2024-12-15 06:28:16.977684] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:56.858 [2024-12-15 06:28:16.977827] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:37:57.118 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:57.118 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:37:57.118 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:57.118 06:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:57.118 06:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:57.118 06:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:57.118 06:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:57.118 06:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.118 06:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:57.118 [2024-12-15 06:28:17.042502] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:57.118 06:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.118 06:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:57.118 06:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.118 06:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:37:57.118 06:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.118 06:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:57.118 06:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.118 06:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:57.118 [2024-12-15 06:28:17.070770] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:57.118 06:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.118 06:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:37:57.118 06:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.118 06:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:57.118 NULL1 00:37:57.118 06:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.118 06:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:57.118 06:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.118 06:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:37:57.118 Delay0 00:37:57.118 06:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.118 06:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:57.118 06:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.118 06:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:57.118 06:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:57.118 06:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1227434 00:37:57.118 06:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:37:57.118 06:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:57.118 [2024-12-15 06:28:17.182726] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:37:59.022 06:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:59.022 06:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.022 06:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 starting I/O failed: -6 00:37:59.281 Write completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Write completed with error (sct=0, sc=8) 00:37:59.281 starting I/O failed: -6 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 starting I/O failed: -6 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Write completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 starting I/O failed: -6 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 starting I/O failed: -6 00:37:59.281 Write completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Write completed with error (sct=0, sc=8) 00:37:59.281 Write completed with error (sct=0, sc=8) 00:37:59.281 starting I/O failed: -6 00:37:59.281 Write completed with error (sct=0, 
sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Write completed with error (sct=0, sc=8) 00:37:59.281 starting I/O failed: -6 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Write completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 starting I/O failed: -6 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Write completed with error (sct=0, sc=8) 00:37:59.281 starting I/O failed: -6 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Write completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 starting I/O failed: -6 00:37:59.281 Write completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 starting I/O failed: -6 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Write completed with error (sct=0, sc=8) 00:37:59.281 starting I/O failed: -6 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Write completed with error (sct=0, sc=8) 00:37:59.281 starting I/O failed: -6 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Write completed with error (sct=0, sc=8) 00:37:59.281 Write completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Write completed with error (sct=0, sc=8) 00:37:59.281 starting I/O failed: -6 00:37:59.281 Write completed with error 
(sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 starting I/O failed: -6 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 starting I/O failed: -6 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 starting I/O failed: -6 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Write completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 starting I/O failed: -6 00:37:59.281 [2024-12-15 06:28:19.281056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87a7c0 is same with the state(6) to be set 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Write completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 starting I/O failed: -6 00:37:59.281 Write completed with error (sct=0, sc=8) 00:37:59.281 Write completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.281 Write completed with error (sct=0, sc=8) 00:37:59.281 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 starting I/O failed: -6 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 
00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Write completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Write completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 starting I/O failed: -6 00:37:59.282 Write completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Write completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 starting I/O failed: -6 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Write completed with error (sct=0, sc=8) 00:37:59.282 Write completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Write completed with error (sct=0, sc=8) 00:37:59.282 Write completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Write completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 
00:37:59.282 Write completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 starting I/O failed: -6 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Write completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 starting I/O failed: -6 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Write completed with error (sct=0, sc=8) 00:37:59.282 starting I/O failed: -6 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Write completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Write completed with error (sct=0, sc=8) 00:37:59.282 starting I/O failed: -6 00:37:59.282 Write completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Write completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 starting I/O failed: -6 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 starting I/O failed: -6 00:37:59.282 Write completed with error (sct=0, sc=8) 00:37:59.282 Write completed with error (sct=0, sc=8) 00:37:59.282 
starting I/O failed: -6 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Write completed with error (sct=0, sc=8) 00:37:59.282 starting I/O failed: -6 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 starting I/O failed: -6 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 starting I/O failed: -6 00:37:59.282 Write completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 starting I/O failed: -6 00:37:59.282 Write completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 starting I/O failed: -6 00:37:59.282 Write completed with error (sct=0, sc=8) 00:37:59.282 Write completed with error (sct=0, sc=8) 00:37:59.282 starting I/O failed: -6 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 starting I/O failed: -6 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 starting I/O failed: -6 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Write completed with error (sct=0, sc=8) 00:37:59.282 starting I/O failed: -6 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 starting I/O failed: -6 00:37:59.282 Write completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 starting I/O failed: -6 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 starting I/O failed: -6 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 starting I/O failed: -6 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 
00:37:59.282 starting I/O failed: -6 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Write completed with error (sct=0, sc=8) 00:37:59.282 starting I/O failed: -6 00:37:59.282 Write completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 starting I/O failed: -6 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 starting I/O failed: -6 00:37:59.282 Write completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 starting I/O failed: -6 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 starting I/O failed: -6 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 Write completed with error (sct=0, sc=8) 00:37:59.282 starting I/O failed: -6 00:37:59.282 Write completed with error (sct=0, sc=8) 00:37:59.282 Read completed with error (sct=0, sc=8) 00:37:59.282 starting I/O failed: -6 00:37:59.282 starting I/O failed: -6 00:37:59.282 starting I/O failed: -6 00:37:59.282 starting I/O failed: -6 00:37:59.282 starting I/O failed: -6 00:37:59.282 starting I/O failed: -6 00:37:59.282 starting I/O failed: -6 00:37:59.282 starting I/O failed: -6 00:38:00.219 [2024-12-15 06:28:20.236867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x878190 is same with the state(6) to be set 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, 
sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 [2024-12-15 06:28:20.281653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa93000d060 is same with the state(6) to be set 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 
00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 [2024-12-15 06:28:20.281796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87a5e0 is same with the state(6) to be set 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read 
completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 [2024-12-15 06:28:20.281985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa93000d800 is same with the state(6) to be set 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with 
error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Write completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 Read completed with error (sct=0, sc=8) 00:38:00.220 [2024-12-15 06:28:20.282671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87a400 is same with the state(6) to be set 00:38:00.220 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.220 Initializing NVMe Controllers 00:38:00.220 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:00.220 Controller IO queue size 128, less than required. 00:38:00.220 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:00.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:38:00.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:38:00.220 Initialization complete. Launching workers. 
00:38:00.220 ======================================================== 00:38:00.220 Latency(us) 00:38:00.220 Device Information : IOPS MiB/s Average min max 00:38:00.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.69 0.08 903649.42 362.69 1012849.20 00:38:00.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 185.53 0.09 908428.15 513.76 1012721.69 00:38:00.220 ======================================================== 00:38:00.220 Total : 351.22 0.17 906173.78 362.69 1012849.20 00:38:00.220 00:38:00.220 [2024-12-15 06:28:20.283516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x878190 (9): Bad file descriptor 00:38:00.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:38:00.220 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:38:00.220 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1227434 00:38:00.220 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:38:00.788 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:38:00.788 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1227434 00:38:00.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1227434) - No such process 00:38:00.788 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1227434 00:38:00.788 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:38:00.788 06:28:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1227434 00:38:00.788 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:38:00.788 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:00.788 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:38:00.788 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:00.788 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1227434 00:38:00.788 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:38:00.788 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:00.788 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:00.788 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:00.788 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:00.788 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.788 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:00.788 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:38:00.788 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:00.788 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.788 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:00.788 [2024-12-15 06:28:20.814769] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:00.788 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.788 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:00.788 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.788 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:00.788 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.788 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1227950 00:38:00.788 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:38:00.788 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:00.788 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- target/delete_subsystem.sh@57 -- # kill -0 1227950 00:38:00.788 06:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:00.788 [2024-12-15 06:28:20.899497] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:38:01.355 06:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:01.355 06:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1227950 00:38:01.355 06:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:01.922 06:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:01.922 06:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1227950 00:38:01.922 06:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:02.489 06:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:02.489 06:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1227950 00:38:02.489 06:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:02.747 06:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:02.747 06:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 
-- # kill -0 1227950 00:38:02.747 06:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:03.314 06:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:03.314 06:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1227950 00:38:03.314 06:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:03.882 06:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:03.882 06:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1227950 00:38:03.882 06:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:04.140 Initializing NVMe Controllers 00:38:04.140 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:04.140 Controller IO queue size 128, less than required. 00:38:04.140 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:04.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:38:04.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:38:04.140 Initialization complete. Launching workers. 
00:38:04.140 ======================================================== 00:38:04.140 Latency(us) 00:38:04.140 Device Information : IOPS MiB/s Average min max 00:38:04.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001955.74 1000132.61 1005565.65 00:38:04.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004041.89 1000157.67 1040823.54 00:38:04.140 ======================================================== 00:38:04.140 Total : 256.00 0.12 1002998.81 1000132.61 1040823.54 00:38:04.140 00:38:04.399 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:04.399 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1227950 00:38:04.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1227950) - No such process 00:38:04.399 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1227950 00:38:04.399 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:38:04.399 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:38:04.399 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:04.399 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:38:04.399 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:04.399 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:38:04.399 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:38:04.399 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:04.399 rmmod nvme_tcp 00:38:04.399 rmmod nvme_fabrics 00:38:04.399 rmmod nvme_keyring 00:38:04.399 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:04.399 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:38:04.399 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:38:04.399 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1227383 ']' 00:38:04.399 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1227383 00:38:04.399 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1227383 ']' 00:38:04.399 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1227383 00:38:04.399 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:38:04.399 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:04.400 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1227383 00:38:04.400 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:04.400 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:04.400 06:28:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1227383' 00:38:04.400 killing process with pid 1227383 00:38:04.400 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1227383 00:38:04.400 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1227383 00:38:04.659 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:04.659 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:04.659 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:04.659 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:38:04.659 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:38:04.659 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:04.659 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:38:04.659 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:04.659 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:04.659 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:04.659 06:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:04.659 06:28:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:07.194 00:38:07.194 real 0m16.054s 00:38:07.194 user 0m26.094s 00:38:07.194 sys 0m6.001s 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:07.194 ************************************ 00:38:07.194 END TEST nvmf_delete_subsystem 00:38:07.194 ************************************ 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:07.194 ************************************ 00:38:07.194 START TEST nvmf_host_management 00:38:07.194 ************************************ 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:38:07.194 * Looking for test storage... 
00:38:07.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:38:07.194 06:28:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:38:07.194 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:07.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:07.195 --rc genhtml_branch_coverage=1 00:38:07.195 --rc genhtml_function_coverage=1 00:38:07.195 --rc genhtml_legend=1 00:38:07.195 --rc geninfo_all_blocks=1 00:38:07.195 --rc geninfo_unexecuted_blocks=1 00:38:07.195 00:38:07.195 ' 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:07.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:07.195 --rc genhtml_branch_coverage=1 00:38:07.195 --rc genhtml_function_coverage=1 00:38:07.195 --rc genhtml_legend=1 00:38:07.195 --rc geninfo_all_blocks=1 00:38:07.195 --rc geninfo_unexecuted_blocks=1 00:38:07.195 00:38:07.195 ' 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:07.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:07.195 --rc genhtml_branch_coverage=1 00:38:07.195 --rc genhtml_function_coverage=1 00:38:07.195 --rc genhtml_legend=1 00:38:07.195 --rc geninfo_all_blocks=1 00:38:07.195 --rc geninfo_unexecuted_blocks=1 00:38:07.195 00:38:07.195 ' 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:07.195 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:07.195 --rc genhtml_branch_coverage=1 00:38:07.195 --rc genhtml_function_coverage=1 00:38:07.195 --rc genhtml_legend=1 00:38:07.195 --rc geninfo_all_blocks=1 00:38:07.195 --rc geninfo_unexecuted_blocks=1 00:38:07.195 00:38:07.195 ' 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:07.195 06:28:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:07.195 
06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:07.195 06:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:07.195 06:28:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:07.195 06:28:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:07.195 06:28:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:38:07.195 06:28:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:38:12.614 
06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:12.614 06:28:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:12.614 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:12.614 06:28:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:12.614 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:12.614 06:28:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:12.614 Found net devices under 0000:af:00.0: cvl_0_0 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:12.614 Found net devices under 0000:af:00.1: cvl_0_1 00:38:12.614 06:28:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:38:12.614 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:12.615 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:12.615 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:12.615 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:12.615 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:12.615 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:12.615 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:12.615 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:12.615 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:12.615 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:12.874 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:12.874 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:12.874 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:12.874 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:12.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:12.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:38:12.874 00:38:12.874 --- 10.0.0.2 ping statistics --- 00:38:12.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:12.874 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:38:12.874 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:12.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:12.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:38:12.874 00:38:12.874 --- 10.0.0.1 ping statistics --- 00:38:12.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:12.874 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:38:12.874 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:12.874 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:38:12.874 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:12.874 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:12.874 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:12.874 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:12.874 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:38:12.874 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:12.874 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:12.874 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:38:12.874 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:38:12.874 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:38:12.874 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:12.874 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:12.874 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:12.874 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1232029 00:38:12.874 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1232029 00:38:12.874 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:38:12.874 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1232029 ']' 00:38:12.874 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:12.874 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:38:12.874 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:12.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:12.875 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:12.875 06:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:12.875 [2024-12-15 06:28:32.897370] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:12.875 [2024-12-15 06:28:32.898317] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:12.875 [2024-12-15 06:28:32.898355] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:12.875 [2024-12-15 06:28:32.961459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:12.875 [2024-12-15 06:28:32.985239] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:12.875 [2024-12-15 06:28:32.985275] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:12.875 [2024-12-15 06:28:32.985282] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:12.875 [2024-12-15 06:28:32.985288] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:12.875 [2024-12-15 06:28:32.985293] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:12.875 [2024-12-15 06:28:32.986579] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:38:12.875 [2024-12-15 06:28:32.986691] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:38:12.875 [2024-12-15 06:28:32.986809] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:12.875 [2024-12-15 06:28:32.986810] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:38:13.134 [2024-12-15 06:28:33.050619] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:13.134 [2024-12-15 06:28:33.051291] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:13.134 [2024-12-15 06:28:33.051638] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:38:13.134 [2024-12-15 06:28:33.052094] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:13.134 [2024-12-15 06:28:33.052126] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:38:13.134 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:13.134 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:38:13.134 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:13.134 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:13.134 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:13.134 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:13.134 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:13.134 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.135 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:13.135 [2024-12-15 06:28:33.127566] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:13.135 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.135 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:38:13.135 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:13.135 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:13.135 06:28:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:38:13.135 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:38:13.135 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:38:13.135 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.135 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:13.135 Malloc0 00:38:13.135 [2024-12-15 06:28:33.215862] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:13.135 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.135 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:38:13.135 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:13.135 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:13.135 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1232071 00:38:13.135 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1232071 /var/tmp/bdevperf.sock 00:38:13.135 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1232071 ']' 00:38:13.135 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:38:13.135 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:38:13.135 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:38:13.135 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:13.135 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:13.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:13.135 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:38:13.135 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:13.135 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:38:13.135 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:13.135 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:13.135 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:13.135 { 00:38:13.135 "params": { 00:38:13.135 "name": "Nvme$subsystem", 00:38:13.135 "trtype": "$TEST_TRANSPORT", 00:38:13.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:13.135 "adrfam": "ipv4", 00:38:13.135 "trsvcid": "$NVMF_PORT", 00:38:13.135 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:38:13.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:13.135 "hdgst": ${hdgst:-false}, 00:38:13.135 "ddgst": ${ddgst:-false} 00:38:13.135 }, 00:38:13.135 "method": "bdev_nvme_attach_controller" 00:38:13.135 } 00:38:13.135 EOF 00:38:13.135 )") 00:38:13.394 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:38:13.394 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:38:13.394 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:38:13.394 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:13.394 "params": { 00:38:13.394 "name": "Nvme0", 00:38:13.394 "trtype": "tcp", 00:38:13.394 "traddr": "10.0.0.2", 00:38:13.394 "adrfam": "ipv4", 00:38:13.394 "trsvcid": "4420", 00:38:13.394 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:13.394 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:13.394 "hdgst": false, 00:38:13.394 "ddgst": false 00:38:13.394 }, 00:38:13.394 "method": "bdev_nvme_attach_controller" 00:38:13.394 }' 00:38:13.394 [2024-12-15 06:28:33.315874] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:13.394 [2024-12-15 06:28:33.315923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1232071 ] 00:38:13.394 [2024-12-15 06:28:33.388778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:13.394 [2024-12-15 06:28:33.411037] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:13.653 Running I/O for 10 seconds... 
00:38:13.653 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:13.653 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:38:13.653 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:38:13.653 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.653 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:13.653 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.653 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:13.653 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:38:13.653 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:38:13.653 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:38:13.653 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:38:13.653 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:38:13.653 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:38:13.653 06:28:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:38:13.653 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:38:13.653 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:38:13.653 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.653 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:13.914 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.914 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=103 00:38:13.914 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 103 -ge 100 ']' 00:38:13.914 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:38:13.914 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:38:13.914 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:38:13.914 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:38:13.914 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.914 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:13.914 
[2024-12-15 06:28:33.831293] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857240 is same with the state(6) to be set 00:38:13.914 [2024-12-15 06:28:33.831329] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857240 is same with the state(6) to be set 00:38:13.914 [2024-12-15 06:28:33.831338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857240 is same with the state(6) to be set 00:38:13.914 [2024-12-15 06:28:33.831345] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857240 is same with the state(6) to be set 00:38:13.914 [2024-12-15 06:28:33.831352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857240 is same with the state(6) to be set 00:38:13.914 [2024-12-15 06:28:33.831358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857240 is same with the state(6) to be set 00:38:13.914 [2024-12-15 06:28:33.831365] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857240 is same with the state(6) to be set 00:38:13.914 [2024-12-15 06:28:33.831371] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857240 is same with the state(6) to be set 00:38:13.914 [2024-12-15 06:28:33.831377] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857240 is same with the state(6) to be set 00:38:13.914 [2024-12-15 06:28:33.831383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857240 is same with the state(6) to be set 00:38:13.914 [2024-12-15 06:28:33.831389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857240 is same with the state(6) to be set 00:38:13.914 [2024-12-15 06:28:33.831396] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857240 is same with the state(6) to be set 00:38:13.914 [2024-12-15 06:28:33.831406] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857240 is same with the state(6) to be set 00:38:13.914 [2024-12-15 06:28:33.831412] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857240 is same with the state(6) to be set 00:38:13.914 [2024-12-15 06:28:33.831418] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857240 is same with the state(6) to be set 00:38:13.914 [2024-12-15 06:28:33.831425] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857240 is same with the state(6) to be set 00:38:13.914 [2024-12-15 06:28:33.831431] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857240 is same with the state(6) to be set 00:38:13.914 [2024-12-15 06:28:33.831437] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857240 is same with the state(6) to be set 00:38:13.914 [2024-12-15 06:28:33.831443] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857240 is same with the state(6) to be set 00:38:13.914 [2024-12-15 06:28:33.831449] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857240 is same with the state(6) to be set 00:38:13.914 [2024-12-15 06:28:33.831455] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857240 is same with the state(6) to be set 00:38:13.914 [2024-12-15 06:28:33.831461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857240 is same with the state(6) to be set 00:38:13.914 [2024-12-15 06:28:33.831467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857240 is same with the state(6) to be set 00:38:13.914 [2024-12-15 06:28:33.831479] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1857240 is same with the state(6) to be set 00:38:13.914 [2024-12-15 06:28:33.834705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.914 [2024-12-15 06:28:33.834740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 62 further identical WRITE / "ABORTED - SQ DELETION (00/08)" log pairs omitted: cid:1 through cid:62, lba:24704 through lba:32512 in steps of 128, len:128 each, all on qid:1 ...]
00:38:13.916 [2024-12-15 06:28:33.835723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.916 [2024-12-15 06:28:33.835731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.916 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.916 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:38:13.916 [2024-12-15 06:28:33.836666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:38:13.916 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.916 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:13.916 task offset: 24576 on job bdev=Nvme0n1 fails 00:38:13.916 00:38:13.916 Latency(us) 00:38:13.916 [2024-12-15T05:28:34.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:13.916 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:38:13.916 Job: Nvme0n1 ended in about 0.11 seconds with error 00:38:13.916 Verification LBA range: start 0x0 length 0x400 00:38:13.916 Nvme0n1 : 0.11 1749.73 109.36 583.24 0.00 25309.17 1685.21 26963.38 00:38:13.916 [2024-12-15T05:28:34.056Z]
=================================================================================================================== 00:38:13.916 [2024-12-15T05:28:34.056Z] Total : 1749.73 109.36 583.24 0.00 25309.17 1685.21 26963.38 00:38:13.916 [2024-12-15 06:28:33.839014] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:13.916 [2024-12-15 06:28:33.839037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9b490 (9): Bad file descriptor 00:38:13.916 [2024-12-15 06:28:33.839835] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:38:13.916 [2024-12-15 06:28:33.839904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:38:13.916 [2024-12-15 06:28:33.839926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.916 [2024-12-15 06:28:33.839942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:38:13.916 [2024-12-15 06:28:33.839949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:38:13.916 [2024-12-15 06:28:33.839957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:13.916 [2024-12-15 06:28:33.839964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf9b490 00:38:13.916 [2024-12-15 06:28:33.839982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9b490 (9): Bad file descriptor 00:38:13.916 [2024-12-15 06:28:33.840004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:38:13.916 [2024-12-15 
06:28:33.840013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:38:13.916 [2024-12-15 06:28:33.840021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:38:13.916 [2024-12-15 06:28:33.840029] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:38:13.916 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.916 06:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:38:14.853 06:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1232071 00:38:14.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1232071) - No such process 00:38:14.853 06:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:38:14.853 06:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:38:14.853 06:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:38:14.853 06:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:38:14.853 06:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:38:14.853 06:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 
00:38:14.853 06:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:14.853 06:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:14.853 { 00:38:14.853 "params": { 00:38:14.853 "name": "Nvme$subsystem", 00:38:14.853 "trtype": "$TEST_TRANSPORT", 00:38:14.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:14.853 "adrfam": "ipv4", 00:38:14.853 "trsvcid": "$NVMF_PORT", 00:38:14.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:14.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:14.853 "hdgst": ${hdgst:-false}, 00:38:14.853 "ddgst": ${ddgst:-false} 00:38:14.853 }, 00:38:14.853 "method": "bdev_nvme_attach_controller" 00:38:14.853 } 00:38:14.853 EOF 00:38:14.853 )") 00:38:14.853 06:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:38:14.853 06:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:38:14.853 06:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:38:14.853 06:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:14.853 "params": { 00:38:14.853 "name": "Nvme0", 00:38:14.853 "trtype": "tcp", 00:38:14.853 "traddr": "10.0.0.2", 00:38:14.853 "adrfam": "ipv4", 00:38:14.853 "trsvcid": "4420", 00:38:14.853 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:14.853 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:14.853 "hdgst": false, 00:38:14.853 "ddgst": false 00:38:14.853 }, 00:38:14.853 "method": "bdev_nvme_attach_controller" 00:38:14.853 }' 00:38:14.853 [2024-12-15 06:28:34.903414] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:38:14.853 [2024-12-15 06:28:34.903465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1232309 ] 00:38:14.853 [2024-12-15 06:28:34.979462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:15.114 [2024-12-15 06:28:35.000752] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:15.114 Running I/O for 1 seconds... 00:38:16.050 1984.00 IOPS, 124.00 MiB/s 00:38:16.050 Latency(us) 00:38:16.050 [2024-12-15T05:28:36.190Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:16.050 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:38:16.050 Verification LBA range: start 0x0 length 0x400 00:38:16.050 Nvme0n1 : 1.02 2017.60 126.10 0.00 0.00 31231.95 7052.92 27213.04 00:38:16.050 [2024-12-15T05:28:36.190Z] =================================================================================================================== 00:38:16.050 [2024-12-15T05:28:36.190Z] Total : 2017.60 126.10 0.00 0.00 31231.95 7052.92 27213.04 00:38:16.309 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:38:16.309 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:38:16.309 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:38:16.309 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:38:16.309 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:38:16.309 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:16.309 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:38:16.309 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:16.309 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:38:16.309 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:16.309 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:16.309 rmmod nvme_tcp 00:38:16.309 rmmod nvme_fabrics 00:38:16.309 rmmod nvme_keyring 00:38:16.309 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:16.309 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:38:16.309 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:38:16.309 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1232029 ']' 00:38:16.309 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1232029 00:38:16.309 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1232029 ']' 00:38:16.309 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1232029 00:38:16.309 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:38:16.309 06:28:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:16.309 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1232029 00:38:16.568 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:16.568 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:16.568 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1232029' 00:38:16.568 killing process with pid 1232029 00:38:16.568 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1232029 00:38:16.568 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1232029 00:38:16.568 [2024-12-15 06:28:36.607124] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:38:16.568 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:16.568 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:16.568 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:16.568 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:38:16.568 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:38:16.568 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:16.568 06:28:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:38:16.568 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:16.568 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:16.568 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:16.568 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:16.568 06:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:19.102 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:19.102 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:38:19.102 00:38:19.102 real 0m11.921s 00:38:19.102 user 0m16.429s 00:38:19.102 sys 0m6.041s 00:38:19.102 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:19.102 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:19.102 ************************************ 00:38:19.102 END TEST nvmf_host_management 00:38:19.102 ************************************ 00:38:19.102 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:38:19.102 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:19.102 
06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:19.102 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:19.102 ************************************ 00:38:19.102 START TEST nvmf_lvol 00:38:19.102 ************************************ 00:38:19.102 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:38:19.102 * Looking for test storage... 00:38:19.103 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:38:19.103 06:28:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:19.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:19.103 --rc genhtml_branch_coverage=1 00:38:19.103 --rc 
genhtml_function_coverage=1 00:38:19.103 --rc genhtml_legend=1 00:38:19.103 --rc geninfo_all_blocks=1 00:38:19.103 --rc geninfo_unexecuted_blocks=1 00:38:19.103 00:38:19.103 ' 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:19.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:19.103 --rc genhtml_branch_coverage=1 00:38:19.103 --rc genhtml_function_coverage=1 00:38:19.103 --rc genhtml_legend=1 00:38:19.103 --rc geninfo_all_blocks=1 00:38:19.103 --rc geninfo_unexecuted_blocks=1 00:38:19.103 00:38:19.103 ' 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:19.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:19.103 --rc genhtml_branch_coverage=1 00:38:19.103 --rc genhtml_function_coverage=1 00:38:19.103 --rc genhtml_legend=1 00:38:19.103 --rc geninfo_all_blocks=1 00:38:19.103 --rc geninfo_unexecuted_blocks=1 00:38:19.103 00:38:19.103 ' 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:19.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:19.103 --rc genhtml_branch_coverage=1 00:38:19.103 --rc genhtml_function_coverage=1 00:38:19.103 --rc genhtml_legend=1 00:38:19.103 --rc geninfo_all_blocks=1 00:38:19.103 --rc geninfo_unexecuted_blocks=1 00:38:19.103 00:38:19.103 ' 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.103 06:28:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:19.103 06:28:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:19.103 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:19.104 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:19.104 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:19.104 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:38:19.104 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:38:19.104 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:19.104 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:38:19.104 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:19.104 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:19.104 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:38:19.104 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:19.104 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:19.104 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:19.104 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:19.104 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:19.104 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:19.104 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:19.104 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:38:19.104 06:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:25.674 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:25.674 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:25.674 06:28:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:25.674 Found net devices under 0000:af:00.0: cvl_0_0 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:25.674 Found net devices under 0000:af:00.1: cvl_0_1 00:38:25.674 06:28:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:25.674 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:25.675 06:28:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:25.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:25.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:38:25.675 00:38:25.675 --- 10.0.0.2 ping statistics --- 00:38:25.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:25.675 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:25.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:25.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:38:25.675 00:38:25.675 --- 10.0.0.1 ping statistics --- 00:38:25.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:25.675 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:38:25.675 
06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1236000 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1236000 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1236000 ']' 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:25.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:25.675 06:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:25.675 [2024-12-15 06:28:44.883969] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:38:25.675 [2024-12-15 06:28:44.884879] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:25.675 [2024-12-15 06:28:44.884911] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:25.675 [2024-12-15 06:28:44.966338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:25.675 [2024-12-15 06:28:44.988665] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:25.675 [2024-12-15 06:28:44.988701] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:25.675 [2024-12-15 06:28:44.988709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:25.675 [2024-12-15 06:28:44.988715] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:25.675 [2024-12-15 06:28:44.988720] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:25.675 [2024-12-15 06:28:44.989883] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:25.675 [2024-12-15 06:28:44.989989] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:25.675 [2024-12-15 06:28:44.990013] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:38:25.675 [2024-12-15 06:28:45.053248] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:25.675 [2024-12-15 06:28:45.054079] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:25.675 [2024-12-15 06:28:45.054584] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:25.675 [2024-12-15 06:28:45.054646] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:25.675 06:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:25.675 06:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:38:25.675 06:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:25.675 06:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:25.675 06:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:25.675 06:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:25.675 06:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:25.675 [2024-12-15 06:28:45.286743] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:25.675 06:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:25.675 06:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:38:25.675 06:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:25.675 06:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:38:25.675 06:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:38:25.934 06:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:38:26.193 06:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4285377f-55cc-4a99-bb0c-562077ea0b95 00:38:26.193 06:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4285377f-55cc-4a99-bb0c-562077ea0b95 lvol 20 00:38:26.452 06:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=874c356f-dc97-427d-a8d1-99549a5058d3 00:38:26.452 06:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:26.452 06:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 874c356f-dc97-427d-a8d1-99549a5058d3 00:38:26.710 06:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:26.969 [2024-12-15 06:28:46.906649] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:26.969 06:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:27.228 
06:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1236438 00:38:27.228 06:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:38:27.228 06:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:38:28.164 06:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 874c356f-dc97-427d-a8d1-99549a5058d3 MY_SNAPSHOT 00:38:28.423 06:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d22363dd-b3c7-43c9-9aba-1321de40c7c0 00:38:28.423 06:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 874c356f-dc97-427d-a8d1-99549a5058d3 30 00:38:28.682 06:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d22363dd-b3c7-43c9-9aba-1321de40c7c0 MY_CLONE 00:38:28.941 06:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=54fa9936-1d49-4d7b-90c9-13d1fef85d79 00:38:28.941 06:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 54fa9936-1d49-4d7b-90c9-13d1fef85d79 00:38:29.508 06:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1236438 00:38:37.625 Initializing NVMe Controllers 00:38:37.625 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:38:37.625 
Controller IO queue size 128, less than required. 00:38:37.625 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:37.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:38:37.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:38:37.625 Initialization complete. Launching workers. 00:38:37.625 ======================================================== 00:38:37.625 Latency(us) 00:38:37.625 Device Information : IOPS MiB/s Average min max 00:38:37.625 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12042.48 47.04 10630.18 1551.05 65165.95 00:38:37.625 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12265.27 47.91 10436.29 3515.27 47707.02 00:38:37.625 ======================================================== 00:38:37.625 Total : 24307.76 94.95 10532.35 1551.05 65165.95 00:38:37.625 00:38:37.625 06:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:37.625 06:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 874c356f-dc97-427d-a8d1-99549a5058d3 00:38:37.883 06:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4285377f-55cc-4a99-bb0c-562077ea0b95 00:38:37.883 06:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:38:37.883 06:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:38:37.883 06:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:38:37.883 06:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:37.884 06:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:38:37.884 06:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:37.884 06:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:38:37.884 06:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:37.884 06:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:37.884 rmmod nvme_tcp 00:38:38.143 rmmod nvme_fabrics 00:38:38.143 rmmod nvme_keyring 00:38:38.143 06:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:38.143 06:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:38:38.143 06:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:38:38.143 06:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1236000 ']' 00:38:38.143 06:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1236000 00:38:38.143 06:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1236000 ']' 00:38:38.143 06:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1236000 00:38:38.143 06:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:38:38.143 06:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:38.143 06:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 1236000 00:38:38.143 06:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:38.143 06:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:38.143 06:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1236000' 00:38:38.143 killing process with pid 1236000 00:38:38.143 06:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1236000 00:38:38.143 06:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1236000 00:38:38.402 06:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:38.402 06:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:38.402 06:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:38.402 06:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:38:38.402 06:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:38:38.402 06:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:38.402 06:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:38:38.402 06:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:38.402 06:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:38.402 06:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:38.402 06:28:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:38.402 06:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:40.307 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:40.307 00:38:40.307 real 0m21.624s 00:38:40.307 user 0m55.327s 00:38:40.307 sys 0m9.602s 00:38:40.307 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:40.307 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:40.307 ************************************ 00:38:40.307 END TEST nvmf_lvol 00:38:40.307 ************************************ 00:38:40.307 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:40.307 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:40.307 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:40.307 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:40.567 ************************************ 00:38:40.567 START TEST nvmf_lvs_grow 00:38:40.567 ************************************ 00:38:40.567 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:40.567 * Looking for test storage... 
00:38:40.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:40.567 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:40.567 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:38:40.567 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:40.567 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:40.567 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:40.567 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:40.568 06:29:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:40.568 06:29:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:40.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:40.568 --rc genhtml_branch_coverage=1 00:38:40.568 --rc genhtml_function_coverage=1 00:38:40.568 --rc genhtml_legend=1 00:38:40.568 --rc geninfo_all_blocks=1 00:38:40.568 --rc geninfo_unexecuted_blocks=1 00:38:40.568 00:38:40.568 ' 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:40.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:40.568 --rc genhtml_branch_coverage=1 00:38:40.568 --rc genhtml_function_coverage=1 00:38:40.568 --rc genhtml_legend=1 00:38:40.568 --rc geninfo_all_blocks=1 00:38:40.568 --rc geninfo_unexecuted_blocks=1 00:38:40.568 00:38:40.568 ' 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:40.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:40.568 --rc genhtml_branch_coverage=1 00:38:40.568 --rc genhtml_function_coverage=1 00:38:40.568 --rc genhtml_legend=1 00:38:40.568 --rc geninfo_all_blocks=1 00:38:40.568 --rc geninfo_unexecuted_blocks=1 00:38:40.568 00:38:40.568 ' 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:40.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:40.568 --rc genhtml_branch_coverage=1 00:38:40.568 --rc genhtml_function_coverage=1 00:38:40.568 --rc genhtml_legend=1 00:38:40.568 --rc geninfo_all_blocks=1 00:38:40.568 --rc 
geninfo_unexecuted_blocks=1 00:38:40.568 00:38:40.568 ' 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:40.568 06:29:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.568 06:29:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:40.568 06:29:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:40.568 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:38:40.569 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:40.569 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:40.569 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:40.569 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:40.569 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:40.569 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:40.569 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:40.569 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:40.569 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:40.569 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:40.569 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:38:40.569 06:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:47.138 
06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:47.138 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:38:47.138 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:47.138 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:47.138 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:47.138 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:47.138 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:47.138 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:38:47.138 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:47.139 06:29:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:47.139 06:29:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:47.139 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:47.139 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:47.139 Found net devices under 0000:af:00.0: cvl_0_0 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:47.139 06:29:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:47.139 Found net devices under 0000:af:00.1: cvl_0_1 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:47.139 
06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:47.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:47.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:38:47.139 00:38:47.139 --- 10.0.0.2 ping statistics --- 00:38:47.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:47.139 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:47.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:47.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:38:47.139 00:38:47.139 --- 10.0.0.1 ping statistics --- 00:38:47.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:47.139 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:47.139 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:47.140 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:47.140 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:47.140 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:47.140 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:47.140 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:47.140 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:38:47.140 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:47.140 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:47.140 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:47.140 06:29:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1241509 00:38:47.140 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:47.140 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1241509 00:38:47.140 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1241509 ']' 00:38:47.140 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:47.140 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:47.140 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:47.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:47.140 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:47.140 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:47.140 [2024-12-15 06:29:06.602333] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:47.140 [2024-12-15 06:29:06.603274] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:38:47.140 [2024-12-15 06:29:06.603308] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:47.140 [2024-12-15 06:29:06.678679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:47.140 [2024-12-15 06:29:06.700076] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:47.140 [2024-12-15 06:29:06.700113] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:47.140 [2024-12-15 06:29:06.700120] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:47.140 [2024-12-15 06:29:06.700126] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:47.140 [2024-12-15 06:29:06.700131] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:47.140 [2024-12-15 06:29:06.700612] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:47.140 [2024-12-15 06:29:06.762729] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:47.140 [2024-12-15 06:29:06.762923] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:47.140 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:47.140 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:38:47.140 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:47.140 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:47.140 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:47.140 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:47.140 06:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:47.140 [2024-12-15 06:29:06.997269] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:47.140 06:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:38:47.140 06:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:47.140 06:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:47.140 06:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:47.140 ************************************ 00:38:47.140 START TEST lvs_grow_clean 00:38:47.140 ************************************ 00:38:47.140 06:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:38:47.140 06:29:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:47.140 06:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:47.140 06:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:47.140 06:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:47.140 06:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:47.140 06:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:47.140 06:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:47.140 06:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:47.140 06:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:47.399 06:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:47.399 06:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:47.399 06:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=d65d5acc-3347-4870-bdc1-4dd58421279e 00:38:47.400 06:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d65d5acc-3347-4870-bdc1-4dd58421279e 00:38:47.400 06:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:47.658 06:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:47.658 06:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:47.658 06:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d65d5acc-3347-4870-bdc1-4dd58421279e lvol 150 00:38:47.917 06:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c4abacf5-3a7c-4faa-9d6b-f1a10ef06e50 00:38:47.917 06:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:47.917 06:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:47.917 [2024-12-15 06:29:08.040989] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:47.917 [2024-12-15 06:29:08.041143] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:47.917 true 00:38:48.176 06:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d65d5acc-3347-4870-bdc1-4dd58421279e 00:38:48.176 06:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:48.176 06:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:48.176 06:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:48.434 06:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c4abacf5-3a7c-4faa-9d6b-f1a10ef06e50 00:38:48.693 06:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:48.693 [2024-12-15 06:29:08.789465] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:48.693 06:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:48.951 06:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1241988 00:38:48.951 06:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:48.951 06:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:48.951 06:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1241988 /var/tmp/bdevperf.sock 00:38:48.951 06:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1241988 ']' 00:38:48.951 06:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:48.951 06:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:48.951 06:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:48.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:38:48.951 06:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:48.951 06:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:48.951 [2024-12-15 06:29:09.037847] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:48.951 [2024-12-15 06:29:09.037894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1241988 ] 00:38:49.210 [2024-12-15 06:29:09.114285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:49.210 [2024-12-15 06:29:09.136873] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:49.210 06:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:49.210 06:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:38:49.210 06:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:49.469 Nvme0n1 00:38:49.469 06:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:49.728 [ 00:38:49.728 { 00:38:49.728 "name": "Nvme0n1", 00:38:49.728 "aliases": [ 00:38:49.728 "c4abacf5-3a7c-4faa-9d6b-f1a10ef06e50" 00:38:49.728 ], 00:38:49.728 "product_name": "NVMe disk", 00:38:49.728 
"block_size": 4096, 00:38:49.728 "num_blocks": 38912, 00:38:49.728 "uuid": "c4abacf5-3a7c-4faa-9d6b-f1a10ef06e50", 00:38:49.728 "numa_id": 1, 00:38:49.728 "assigned_rate_limits": { 00:38:49.728 "rw_ios_per_sec": 0, 00:38:49.728 "rw_mbytes_per_sec": 0, 00:38:49.728 "r_mbytes_per_sec": 0, 00:38:49.728 "w_mbytes_per_sec": 0 00:38:49.728 }, 00:38:49.728 "claimed": false, 00:38:49.728 "zoned": false, 00:38:49.728 "supported_io_types": { 00:38:49.728 "read": true, 00:38:49.728 "write": true, 00:38:49.728 "unmap": true, 00:38:49.728 "flush": true, 00:38:49.728 "reset": true, 00:38:49.728 "nvme_admin": true, 00:38:49.728 "nvme_io": true, 00:38:49.728 "nvme_io_md": false, 00:38:49.728 "write_zeroes": true, 00:38:49.728 "zcopy": false, 00:38:49.728 "get_zone_info": false, 00:38:49.728 "zone_management": false, 00:38:49.728 "zone_append": false, 00:38:49.728 "compare": true, 00:38:49.728 "compare_and_write": true, 00:38:49.728 "abort": true, 00:38:49.728 "seek_hole": false, 00:38:49.728 "seek_data": false, 00:38:49.728 "copy": true, 00:38:49.728 "nvme_iov_md": false 00:38:49.728 }, 00:38:49.728 "memory_domains": [ 00:38:49.728 { 00:38:49.728 "dma_device_id": "system", 00:38:49.728 "dma_device_type": 1 00:38:49.728 } 00:38:49.728 ], 00:38:49.728 "driver_specific": { 00:38:49.728 "nvme": [ 00:38:49.728 { 00:38:49.728 "trid": { 00:38:49.728 "trtype": "TCP", 00:38:49.728 "adrfam": "IPv4", 00:38:49.728 "traddr": "10.0.0.2", 00:38:49.728 "trsvcid": "4420", 00:38:49.728 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:49.728 }, 00:38:49.728 "ctrlr_data": { 00:38:49.728 "cntlid": 1, 00:38:49.728 "vendor_id": "0x8086", 00:38:49.728 "model_number": "SPDK bdev Controller", 00:38:49.728 "serial_number": "SPDK0", 00:38:49.728 "firmware_revision": "25.01", 00:38:49.728 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:49.728 "oacs": { 00:38:49.728 "security": 0, 00:38:49.728 "format": 0, 00:38:49.728 "firmware": 0, 00:38:49.728 "ns_manage": 0 00:38:49.728 }, 00:38:49.728 "multi_ctrlr": true, 
00:38:49.728 "ana_reporting": false 00:38:49.728 }, 00:38:49.728 "vs": { 00:38:49.728 "nvme_version": "1.3" 00:38:49.728 }, 00:38:49.728 "ns_data": { 00:38:49.728 "id": 1, 00:38:49.728 "can_share": true 00:38:49.728 } 00:38:49.728 } 00:38:49.728 ], 00:38:49.728 "mp_policy": "active_passive" 00:38:49.728 } 00:38:49.728 } 00:38:49.728 ] 00:38:49.728 06:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1242003 00:38:49.728 06:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:49.728 06:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:49.728 Running I/O for 10 seconds... 00:38:50.664 Latency(us) 00:38:50.664 [2024-12-15T05:29:10.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:50.664 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:50.664 Nvme0n1 : 1.00 22352.00 87.31 0.00 0.00 0.00 0.00 0.00 00:38:50.664 [2024-12-15T05:29:10.804Z] =================================================================================================================== 00:38:50.664 [2024-12-15T05:29:10.804Z] Total : 22352.00 87.31 0.00 0.00 0.00 0.00 0.00 00:38:50.664 00:38:51.599 06:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d65d5acc-3347-4870-bdc1-4dd58421279e 00:38:51.857 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:51.857 Nvme0n1 : 2.00 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:38:51.857 [2024-12-15T05:29:11.997Z] 
=================================================================================================================== 00:38:51.857 [2024-12-15T05:29:11.997Z] Total : 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:38:51.857 00:38:51.857 true 00:38:51.857 06:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d65d5acc-3347-4870-bdc1-4dd58421279e 00:38:51.858 06:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:52.116 06:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:52.116 06:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:52.116 06:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1242003 00:38:52.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:52.683 Nvme0n1 : 3.00 23029.33 89.96 0.00 0.00 0.00 0.00 0.00 00:38:52.683 [2024-12-15T05:29:12.823Z] =================================================================================================================== 00:38:52.683 [2024-12-15T05:29:12.823Z] Total : 23029.33 89.96 0.00 0.00 0.00 0.00 0.00 00:38:52.683 00:38:53.618 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:53.618 Nvme0n1 : 4.00 23177.50 90.54 0.00 0.00 0.00 0.00 0.00 00:38:53.618 [2024-12-15T05:29:13.758Z] =================================================================================================================== 00:38:53.618 [2024-12-15T05:29:13.758Z] Total : 23177.50 90.54 0.00 0.00 0.00 0.00 0.00 00:38:53.618 00:38:54.993 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:38:54.993 Nvme0n1 : 5.00 23266.40 90.88 0.00 0.00 0.00 0.00 0.00 00:38:54.993 [2024-12-15T05:29:15.133Z] =================================================================================================================== 00:38:54.993 [2024-12-15T05:29:15.133Z] Total : 23266.40 90.88 0.00 0.00 0.00 0.00 0.00 00:38:54.993 00:38:55.924 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:55.924 Nvme0n1 : 6.00 23325.67 91.12 0.00 0.00 0.00 0.00 0.00 00:38:55.924 [2024-12-15T05:29:16.064Z] =================================================================================================================== 00:38:55.924 [2024-12-15T05:29:16.064Z] Total : 23325.67 91.12 0.00 0.00 0.00 0.00 0.00 00:38:55.924 00:38:56.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:56.858 Nvme0n1 : 7.00 23377.14 91.32 0.00 0.00 0.00 0.00 0.00 00:38:56.858 [2024-12-15T05:29:16.998Z] =================================================================================================================== 00:38:56.858 [2024-12-15T05:29:16.998Z] Total : 23377.14 91.32 0.00 0.00 0.00 0.00 0.00 00:38:56.858 00:38:57.792 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:57.792 Nvme0n1 : 8.00 23421.75 91.49 0.00 0.00 0.00 0.00 0.00 00:38:57.792 [2024-12-15T05:29:17.932Z] =================================================================================================================== 00:38:57.792 [2024-12-15T05:29:17.932Z] Total : 23421.75 91.49 0.00 0.00 0.00 0.00 0.00 00:38:57.792 00:38:58.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:58.727 Nvme0n1 : 9.00 23444.00 91.58 0.00 0.00 0.00 0.00 0.00 00:38:58.727 [2024-12-15T05:29:18.867Z] =================================================================================================================== 00:38:58.727 [2024-12-15T05:29:18.867Z] Total : 23444.00 91.58 0.00 0.00 0.00 0.00 0.00 00:38:58.727 
00:38:59.662 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:59.662 Nvme0n1 : 10.00 23461.80 91.65 0.00 0.00 0.00 0.00 0.00 00:38:59.662 [2024-12-15T05:29:19.802Z] =================================================================================================================== 00:38:59.662 [2024-12-15T05:29:19.802Z] Total : 23461.80 91.65 0.00 0.00 0.00 0.00 0.00 00:38:59.662 00:38:59.662 00:38:59.662 Latency(us) 00:38:59.662 [2024-12-15T05:29:19.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:59.662 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:59.662 Nvme0n1 : 10.00 23464.10 91.66 0.00 0.00 5452.36 3120.76 27213.04 00:38:59.662 [2024-12-15T05:29:19.802Z] =================================================================================================================== 00:38:59.662 [2024-12-15T05:29:19.802Z] Total : 23464.10 91.66 0.00 0.00 5452.36 3120.76 27213.04 00:38:59.662 { 00:38:59.662 "results": [ 00:38:59.662 { 00:38:59.662 "job": "Nvme0n1", 00:38:59.662 "core_mask": "0x2", 00:38:59.662 "workload": "randwrite", 00:38:59.662 "status": "finished", 00:38:59.662 "queue_depth": 128, 00:38:59.662 "io_size": 4096, 00:38:59.662 "runtime": 10.004476, 00:38:59.662 "iops": 23464.09746997244, 00:38:59.662 "mibps": 91.65663074207984, 00:38:59.662 "io_failed": 0, 00:38:59.662 "io_timeout": 0, 00:38:59.662 "avg_latency_us": 5452.359677170827, 00:38:59.662 "min_latency_us": 3120.7619047619046, 00:38:59.662 "max_latency_us": 27213.04380952381 00:38:59.662 } 00:38:59.662 ], 00:38:59.662 "core_count": 1 00:38:59.662 } 00:38:59.662 06:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1241988 00:38:59.662 06:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1241988 ']' 00:38:59.662 06:29:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1241988 00:38:59.662 06:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:38:59.662 06:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:59.662 06:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1241988 00:38:59.921 06:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:59.921 06:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:59.921 06:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1241988' 00:38:59.921 killing process with pid 1241988 00:38:59.921 06:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1241988 00:38:59.921 Received shutdown signal, test time was about 10.000000 seconds 00:38:59.921 00:38:59.921 Latency(us) 00:38:59.921 [2024-12-15T05:29:20.061Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:59.921 [2024-12-15T05:29:20.061Z] =================================================================================================================== 00:38:59.921 [2024-12-15T05:29:20.061Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:59.921 06:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1241988 00:38:59.921 06:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:00.180 06:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:00.439 06:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d65d5acc-3347-4870-bdc1-4dd58421279e 00:39:00.439 06:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:39:00.698 06:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:39:00.698 06:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:39:00.698 06:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:00.698 [2024-12-15 06:29:20.757079] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:39:00.698 06:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d65d5acc-3347-4870-bdc1-4dd58421279e 00:39:00.698 06:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:39:00.698 06:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d65d5acc-3347-4870-bdc1-4dd58421279e 00:39:00.698 06:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:00.698 06:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:00.698 06:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:00.698 06:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:00.698 06:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:00.698 06:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:00.698 06:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:00.698 06:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:39:00.698 06:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d65d5acc-3347-4870-bdc1-4dd58421279e 00:39:00.956 request: 00:39:00.956 { 00:39:00.956 "uuid": "d65d5acc-3347-4870-bdc1-4dd58421279e", 00:39:00.956 "method": 
"bdev_lvol_get_lvstores", 00:39:00.956 "req_id": 1 00:39:00.956 } 00:39:00.956 Got JSON-RPC error response 00:39:00.956 response: 00:39:00.956 { 00:39:00.956 "code": -19, 00:39:00.956 "message": "No such device" 00:39:00.956 } 00:39:00.956 06:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:39:00.957 06:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:00.957 06:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:00.957 06:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:00.957 06:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:01.215 aio_bdev 00:39:01.215 06:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c4abacf5-3a7c-4faa-9d6b-f1a10ef06e50 00:39:01.215 06:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=c4abacf5-3a7c-4faa-9d6b-f1a10ef06e50 00:39:01.215 06:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:01.215 06:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:39:01.215 06:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:01.215 06:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:01.215 06:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:01.474 06:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c4abacf5-3a7c-4faa-9d6b-f1a10ef06e50 -t 2000 00:39:01.474 [ 00:39:01.474 { 00:39:01.474 "name": "c4abacf5-3a7c-4faa-9d6b-f1a10ef06e50", 00:39:01.474 "aliases": [ 00:39:01.474 "lvs/lvol" 00:39:01.474 ], 00:39:01.474 "product_name": "Logical Volume", 00:39:01.474 "block_size": 4096, 00:39:01.474 "num_blocks": 38912, 00:39:01.474 "uuid": "c4abacf5-3a7c-4faa-9d6b-f1a10ef06e50", 00:39:01.474 "assigned_rate_limits": { 00:39:01.474 "rw_ios_per_sec": 0, 00:39:01.474 "rw_mbytes_per_sec": 0, 00:39:01.474 "r_mbytes_per_sec": 0, 00:39:01.474 "w_mbytes_per_sec": 0 00:39:01.474 }, 00:39:01.474 "claimed": false, 00:39:01.474 "zoned": false, 00:39:01.474 "supported_io_types": { 00:39:01.474 "read": true, 00:39:01.474 "write": true, 00:39:01.474 "unmap": true, 00:39:01.474 "flush": false, 00:39:01.474 "reset": true, 00:39:01.474 "nvme_admin": false, 00:39:01.474 "nvme_io": false, 00:39:01.474 "nvme_io_md": false, 00:39:01.474 "write_zeroes": true, 00:39:01.474 "zcopy": false, 00:39:01.474 "get_zone_info": false, 00:39:01.474 "zone_management": false, 00:39:01.474 "zone_append": false, 00:39:01.474 "compare": false, 00:39:01.474 "compare_and_write": false, 00:39:01.474 "abort": false, 00:39:01.474 "seek_hole": true, 00:39:01.474 "seek_data": true, 00:39:01.474 "copy": false, 00:39:01.474 "nvme_iov_md": false 00:39:01.474 }, 00:39:01.474 "driver_specific": { 00:39:01.474 "lvol": { 00:39:01.474 "lvol_store_uuid": "d65d5acc-3347-4870-bdc1-4dd58421279e", 00:39:01.474 "base_bdev": "aio_bdev", 00:39:01.474 
"thin_provision": false, 00:39:01.474 "num_allocated_clusters": 38, 00:39:01.474 "snapshot": false, 00:39:01.474 "clone": false, 00:39:01.474 "esnap_clone": false 00:39:01.474 } 00:39:01.474 } 00:39:01.474 } 00:39:01.474 ] 00:39:01.474 06:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:39:01.474 06:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d65d5acc-3347-4870-bdc1-4dd58421279e 00:39:01.474 06:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:39:01.733 06:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:39:01.733 06:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:39:01.733 06:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d65d5acc-3347-4870-bdc1-4dd58421279e 00:39:01.992 06:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:39:01.992 06:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c4abacf5-3a7c-4faa-9d6b-f1a10ef06e50 00:39:02.251 06:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d65d5acc-3347-4870-bdc1-4dd58421279e 
00:39:02.251 06:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:02.510 06:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:02.510 00:39:02.510 real 0m15.519s 00:39:02.510 user 0m15.062s 00:39:02.510 sys 0m1.479s 00:39:02.510 06:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:02.510 06:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:39:02.510 ************************************ 00:39:02.510 END TEST lvs_grow_clean 00:39:02.510 ************************************ 00:39:02.510 06:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:39:02.510 06:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:02.510 06:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:02.510 06:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:02.510 ************************************ 00:39:02.510 START TEST lvs_grow_dirty 00:39:02.510 ************************************ 00:39:02.510 06:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:39:02.510 06:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:39:02.770 06:29:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:39:02.770 06:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:39:02.770 06:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:39:02.770 06:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:39:02.770 06:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:39:02.770 06:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:02.770 06:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:02.770 06:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:02.770 06:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:39:02.770 06:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:39:03.029 06:29:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=86a3fbbb-7fab-43c5-ba74-247e03574299 00:39:03.029 06:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86a3fbbb-7fab-43c5-ba74-247e03574299 00:39:03.029 06:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:39:03.290 06:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:39:03.290 06:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:39:03.290 06:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 86a3fbbb-7fab-43c5-ba74-247e03574299 lvol 150 00:39:03.627 06:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=bdec6fa5-f223-48fc-bba0-dc8867364121 00:39:03.627 06:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:03.627 06:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:03.627 [2024-12-15 06:29:23.653109] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:03.627 [2024-12-15 
06:29:23.653240] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:03.627 true 00:39:03.627 06:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:03.627 06:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86a3fbbb-7fab-43c5-ba74-247e03574299 00:39:03.954 06:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:03.954 06:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:03.954 06:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdec6fa5-f223-48fc-bba0-dc8867364121 00:39:04.213 06:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:04.473 [2024-12-15 06:29:24.397532] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:04.473 06:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:04.732 06:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:39:04.732 06:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1244485 00:39:04.732 06:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:04.732 06:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1244485 /var/tmp/bdevperf.sock 00:39:04.732 06:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1244485 ']' 00:39:04.732 06:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:04.732 06:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:04.732 06:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:04.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:04.732 06:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:04.732 06:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:04.732 [2024-12-15 06:29:24.653745] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:39:04.732 [2024-12-15 06:29:24.653792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1244485 ] 00:39:04.732 [2024-12-15 06:29:24.711162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:04.732 [2024-12-15 06:29:24.734122] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:04.732 06:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:04.732 06:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:39:04.732 06:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:39:05.299 Nvme0n1 00:39:05.299 06:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:39:05.299 [ 00:39:05.299 { 00:39:05.299 "name": "Nvme0n1", 00:39:05.299 "aliases": [ 00:39:05.299 "bdec6fa5-f223-48fc-bba0-dc8867364121" 00:39:05.299 ], 00:39:05.299 "product_name": "NVMe disk", 00:39:05.299 "block_size": 4096, 00:39:05.299 "num_blocks": 38912, 00:39:05.299 "uuid": "bdec6fa5-f223-48fc-bba0-dc8867364121", 00:39:05.299 "numa_id": 1, 00:39:05.299 "assigned_rate_limits": { 00:39:05.299 "rw_ios_per_sec": 0, 00:39:05.299 "rw_mbytes_per_sec": 0, 00:39:05.299 "r_mbytes_per_sec": 0, 00:39:05.299 "w_mbytes_per_sec": 0 00:39:05.299 }, 00:39:05.299 "claimed": false, 00:39:05.299 "zoned": false, 
00:39:05.299 "supported_io_types": { 00:39:05.299 "read": true, 00:39:05.299 "write": true, 00:39:05.299 "unmap": true, 00:39:05.299 "flush": true, 00:39:05.299 "reset": true, 00:39:05.299 "nvme_admin": true, 00:39:05.299 "nvme_io": true, 00:39:05.299 "nvme_io_md": false, 00:39:05.299 "write_zeroes": true, 00:39:05.299 "zcopy": false, 00:39:05.299 "get_zone_info": false, 00:39:05.299 "zone_management": false, 00:39:05.299 "zone_append": false, 00:39:05.299 "compare": true, 00:39:05.299 "compare_and_write": true, 00:39:05.299 "abort": true, 00:39:05.299 "seek_hole": false, 00:39:05.299 "seek_data": false, 00:39:05.299 "copy": true, 00:39:05.299 "nvme_iov_md": false 00:39:05.299 }, 00:39:05.299 "memory_domains": [ 00:39:05.299 { 00:39:05.299 "dma_device_id": "system", 00:39:05.299 "dma_device_type": 1 00:39:05.299 } 00:39:05.299 ], 00:39:05.299 "driver_specific": { 00:39:05.299 "nvme": [ 00:39:05.299 { 00:39:05.299 "trid": { 00:39:05.299 "trtype": "TCP", 00:39:05.299 "adrfam": "IPv4", 00:39:05.299 "traddr": "10.0.0.2", 00:39:05.299 "trsvcid": "4420", 00:39:05.299 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:39:05.299 }, 00:39:05.299 "ctrlr_data": { 00:39:05.299 "cntlid": 1, 00:39:05.299 "vendor_id": "0x8086", 00:39:05.299 "model_number": "SPDK bdev Controller", 00:39:05.299 "serial_number": "SPDK0", 00:39:05.299 "firmware_revision": "25.01", 00:39:05.299 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:05.299 "oacs": { 00:39:05.299 "security": 0, 00:39:05.299 "format": 0, 00:39:05.299 "firmware": 0, 00:39:05.299 "ns_manage": 0 00:39:05.299 }, 00:39:05.299 "multi_ctrlr": true, 00:39:05.299 "ana_reporting": false 00:39:05.299 }, 00:39:05.299 "vs": { 00:39:05.299 "nvme_version": "1.3" 00:39:05.299 }, 00:39:05.299 "ns_data": { 00:39:05.299 "id": 1, 00:39:05.299 "can_share": true 00:39:05.299 } 00:39:05.299 } 00:39:05.299 ], 00:39:05.299 "mp_policy": "active_passive" 00:39:05.299 } 00:39:05.299 } 00:39:05.299 ] 00:39:05.299 06:29:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:05.299 06:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1244517 00:39:05.299 06:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:39:05.557 Running I/O for 10 seconds... 00:39:06.494 Latency(us) 00:39:06.494 [2024-12-15T05:29:26.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:06.494 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:06.494 Nvme0n1 : 1.00 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:39:06.494 [2024-12-15T05:29:26.634Z] =================================================================================================================== 00:39:06.494 [2024-12-15T05:29:26.634Z] Total : 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:39:06.494 00:39:07.430 06:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 86a3fbbb-7fab-43c5-ba74-247e03574299 00:39:07.430 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:07.430 Nvme0n1 : 2.00 23146.00 90.41 0.00 0.00 0.00 0.00 0.00 00:39:07.430 [2024-12-15T05:29:27.570Z] =================================================================================================================== 00:39:07.430 [2024-12-15T05:29:27.570Z] Total : 23146.00 90.41 0.00 0.00 0.00 0.00 0.00 00:39:07.430 00:39:07.430 true 00:39:07.689 06:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 86a3fbbb-7fab-43c5-ba74-247e03574299 00:39:07.689 06:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:39:07.689 06:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:39:07.689 06:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:39:07.689 06:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1244517 00:39:08.626 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:08.626 Nvme0n1 : 3.00 23241.33 90.79 0.00 0.00 0.00 0.00 0.00 00:39:08.626 [2024-12-15T05:29:28.766Z] =================================================================================================================== 00:39:08.626 [2024-12-15T05:29:28.766Z] Total : 23241.33 90.79 0.00 0.00 0.00 0.00 0.00 00:39:08.626 00:39:09.562 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:09.562 Nvme0n1 : 4.00 23329.00 91.13 0.00 0.00 0.00 0.00 0.00 00:39:09.562 [2024-12-15T05:29:29.702Z] =================================================================================================================== 00:39:09.562 [2024-12-15T05:29:29.702Z] Total : 23329.00 91.13 0.00 0.00 0.00 0.00 0.00 00:39:09.562 00:39:10.497 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:10.497 Nvme0n1 : 5.00 23336.80 91.16 0.00 0.00 0.00 0.00 0.00 00:39:10.497 [2024-12-15T05:29:30.637Z] =================================================================================================================== 00:39:10.497 [2024-12-15T05:29:30.637Z] Total : 23336.80 91.16 0.00 0.00 0.00 0.00 0.00 00:39:10.497 00:39:11.434 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:39:11.434 Nvme0n1 : 6.00 23384.33 91.35 0.00 0.00 0.00 0.00 0.00 00:39:11.434 [2024-12-15T05:29:31.574Z] =================================================================================================================== 00:39:11.434 [2024-12-15T05:29:31.574Z] Total : 23384.33 91.35 0.00 0.00 0.00 0.00 0.00 00:39:11.434 00:39:12.369 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:12.369 Nvme0n1 : 7.00 23436.43 91.55 0.00 0.00 0.00 0.00 0.00 00:39:12.369 [2024-12-15T05:29:32.509Z] =================================================================================================================== 00:39:12.369 [2024-12-15T05:29:32.509Z] Total : 23436.43 91.55 0.00 0.00 0.00 0.00 0.00 00:39:12.369 00:39:13.747 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:13.747 Nvme0n1 : 8.00 23459.62 91.64 0.00 0.00 0.00 0.00 0.00 00:39:13.747 [2024-12-15T05:29:33.887Z] =================================================================================================================== 00:39:13.747 [2024-12-15T05:29:33.887Z] Total : 23459.62 91.64 0.00 0.00 0.00 0.00 0.00 00:39:13.747 00:39:14.683 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:14.683 Nvme0n1 : 9.00 23491.78 91.76 0.00 0.00 0.00 0.00 0.00 00:39:14.683 [2024-12-15T05:29:34.823Z] =================================================================================================================== 00:39:14.683 [2024-12-15T05:29:34.823Z] Total : 23491.78 91.76 0.00 0.00 0.00 0.00 0.00 00:39:14.683 00:39:15.619 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:15.619 Nvme0n1 : 10.00 23517.50 91.87 0.00 0.00 0.00 0.00 0.00 00:39:15.619 [2024-12-15T05:29:35.759Z] =================================================================================================================== 00:39:15.619 [2024-12-15T05:29:35.759Z] Total : 23517.50 91.87 0.00 0.00 0.00 0.00 0.00 00:39:15.619 00:39:15.619 
00:39:15.620 Latency(us) 00:39:15.620 [2024-12-15T05:29:35.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:15.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:15.620 Nvme0n1 : 10.00 23523.09 91.89 0.00 0.00 5438.48 2995.93 26838.55 00:39:15.620 [2024-12-15T05:29:35.760Z] =================================================================================================================== 00:39:15.620 [2024-12-15T05:29:35.760Z] Total : 23523.09 91.89 0.00 0.00 5438.48 2995.93 26838.55 00:39:15.620 { 00:39:15.620 "results": [ 00:39:15.620 { 00:39:15.620 "job": "Nvme0n1", 00:39:15.620 "core_mask": "0x2", 00:39:15.620 "workload": "randwrite", 00:39:15.620 "status": "finished", 00:39:15.620 "queue_depth": 128, 00:39:15.620 "io_size": 4096, 00:39:15.620 "runtime": 10.003065, 00:39:15.620 "iops": 23523.09017286202, 00:39:15.620 "mibps": 91.88707098774226, 00:39:15.620 "io_failed": 0, 00:39:15.620 "io_timeout": 0, 00:39:15.620 "avg_latency_us": 5438.478785258238, 00:39:15.620 "min_latency_us": 2995.9314285714286, 00:39:15.620 "max_latency_us": 26838.55238095238 00:39:15.620 } 00:39:15.620 ], 00:39:15.620 "core_count": 1 00:39:15.620 } 00:39:15.620 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1244485 00:39:15.620 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1244485 ']' 00:39:15.620 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1244485 00:39:15.620 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:39:15.620 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:15.620 06:29:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1244485 00:39:15.620 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:15.620 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:15.620 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1244485' 00:39:15.620 killing process with pid 1244485 00:39:15.620 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1244485 00:39:15.620 Received shutdown signal, test time was about 10.000000 seconds 00:39:15.620 00:39:15.620 Latency(us) 00:39:15.620 [2024-12-15T05:29:35.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:15.620 [2024-12-15T05:29:35.760Z] =================================================================================================================== 00:39:15.620 [2024-12-15T05:29:35.760Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:15.620 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1244485 00:39:15.620 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:15.878 06:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:16.137 06:29:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86a3fbbb-7fab-43c5-ba74-247e03574299 00:39:16.137 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:39:16.396 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:39:16.396 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:39:16.396 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1241509 00:39:16.396 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1241509 00:39:16.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1241509 Killed "${NVMF_APP[@]}" "$@" 00:39:16.396 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:39:16.396 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:39:16.396 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:16.396 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:16.396 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:16.396 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1246302 00:39:16.396 06:29:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1246302 00:39:16.396 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:39:16.396 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1246302 ']' 00:39:16.396 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:16.396 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:16.396 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:16.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:16.396 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:16.396 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:16.396 [2024-12-15 06:29:36.402039] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:16.396 [2024-12-15 06:29:36.402891] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:39:16.396 [2024-12-15 06:29:36.402927] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:16.396 [2024-12-15 06:29:36.466167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:16.396 [2024-12-15 06:29:36.487397] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:16.396 [2024-12-15 06:29:36.487435] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:16.396 [2024-12-15 06:29:36.487442] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:16.396 [2024-12-15 06:29:36.487448] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:16.396 [2024-12-15 06:29:36.487453] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:16.397 [2024-12-15 06:29:36.487943] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:16.656 [2024-12-15 06:29:36.549912] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:16.656 [2024-12-15 06:29:36.550124] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:16.656 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:16.656 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:39:16.656 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:16.656 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:16.656 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:16.656 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:16.656 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:16.656 [2024-12-15 06:29:36.785236] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:39:16.656 [2024-12-15 06:29:36.785450] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:39:16.656 [2024-12-15 06:29:36.785535] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:39:16.915 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:39:16.915 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev bdec6fa5-f223-48fc-bba0-dc8867364121 00:39:16.915 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=bdec6fa5-f223-48fc-bba0-dc8867364121 00:39:16.915 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:16.915 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:39:16.915 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:16.915 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:16.915 06:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:16.915 06:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bdec6fa5-f223-48fc-bba0-dc8867364121 -t 2000 00:39:17.174 [ 00:39:17.174 { 00:39:17.174 "name": "bdec6fa5-f223-48fc-bba0-dc8867364121", 00:39:17.174 "aliases": [ 00:39:17.174 "lvs/lvol" 00:39:17.174 ], 00:39:17.174 "product_name": "Logical Volume", 00:39:17.174 "block_size": 4096, 00:39:17.174 "num_blocks": 38912, 00:39:17.174 "uuid": "bdec6fa5-f223-48fc-bba0-dc8867364121", 00:39:17.174 "assigned_rate_limits": { 00:39:17.174 "rw_ios_per_sec": 0, 00:39:17.174 "rw_mbytes_per_sec": 0, 00:39:17.174 "r_mbytes_per_sec": 0, 00:39:17.174 "w_mbytes_per_sec": 0 00:39:17.174 }, 00:39:17.174 "claimed": false, 00:39:17.174 "zoned": false, 00:39:17.174 "supported_io_types": { 00:39:17.174 "read": true, 00:39:17.174 "write": true, 00:39:17.174 "unmap": true, 00:39:17.174 "flush": false, 00:39:17.174 "reset": true, 00:39:17.174 "nvme_admin": false, 00:39:17.174 "nvme_io": false, 00:39:17.174 "nvme_io_md": false, 00:39:17.174 "write_zeroes": true, 
00:39:17.174 "zcopy": false, 00:39:17.174 "get_zone_info": false, 00:39:17.174 "zone_management": false, 00:39:17.174 "zone_append": false, 00:39:17.174 "compare": false, 00:39:17.174 "compare_and_write": false, 00:39:17.174 "abort": false, 00:39:17.174 "seek_hole": true, 00:39:17.174 "seek_data": true, 00:39:17.174 "copy": false, 00:39:17.174 "nvme_iov_md": false 00:39:17.174 }, 00:39:17.174 "driver_specific": { 00:39:17.174 "lvol": { 00:39:17.174 "lvol_store_uuid": "86a3fbbb-7fab-43c5-ba74-247e03574299", 00:39:17.174 "base_bdev": "aio_bdev", 00:39:17.174 "thin_provision": false, 00:39:17.174 "num_allocated_clusters": 38, 00:39:17.174 "snapshot": false, 00:39:17.174 "clone": false, 00:39:17.174 "esnap_clone": false 00:39:17.174 } 00:39:17.174 } 00:39:17.174 } 00:39:17.174 ] 00:39:17.174 06:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:39:17.174 06:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86a3fbbb-7fab-43c5-ba74-247e03574299 00:39:17.174 06:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:39:17.433 06:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:39:17.433 06:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86a3fbbb-7fab-43c5-ba74-247e03574299 00:39:17.433 06:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:39:17.692 06:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:39:17.692 06:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:17.692 [2024-12-15 06:29:37.748402] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:39:17.692 06:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86a3fbbb-7fab-43c5-ba74-247e03574299 00:39:17.692 06:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:39:17.692 06:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86a3fbbb-7fab-43c5-ba74-247e03574299 00:39:17.692 06:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:17.692 06:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:17.692 06:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:17.692 06:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:17.692 06:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:17.692 06:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:17.692 06:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:17.692 06:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:39:17.692 06:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86a3fbbb-7fab-43c5-ba74-247e03574299 00:39:17.951 request: 00:39:17.951 { 00:39:17.951 "uuid": "86a3fbbb-7fab-43c5-ba74-247e03574299", 00:39:17.951 "method": "bdev_lvol_get_lvstores", 00:39:17.951 "req_id": 1 00:39:17.951 } 00:39:17.951 Got JSON-RPC error response 00:39:17.951 response: 00:39:17.951 { 00:39:17.951 "code": -19, 00:39:17.951 "message": "No such device" 00:39:17.951 } 00:39:17.951 06:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:39:17.951 06:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:17.951 06:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:17.951 06:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:17.951 06:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:18.210 aio_bdev 00:39:18.210 06:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev bdec6fa5-f223-48fc-bba0-dc8867364121 00:39:18.210 06:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=bdec6fa5-f223-48fc-bba0-dc8867364121 00:39:18.210 06:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:18.210 06:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:39:18.210 06:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:18.210 06:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:18.210 06:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:18.469 06:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bdec6fa5-f223-48fc-bba0-dc8867364121 -t 2000 00:39:18.469 [ 00:39:18.469 { 00:39:18.469 "name": "bdec6fa5-f223-48fc-bba0-dc8867364121", 00:39:18.469 "aliases": [ 00:39:18.469 "lvs/lvol" 00:39:18.469 ], 00:39:18.469 "product_name": "Logical Volume", 00:39:18.469 "block_size": 4096, 00:39:18.469 "num_blocks": 38912, 00:39:18.469 "uuid": "bdec6fa5-f223-48fc-bba0-dc8867364121", 00:39:18.469 "assigned_rate_limits": { 00:39:18.469 "rw_ios_per_sec": 0, 00:39:18.469 "rw_mbytes_per_sec": 0, 00:39:18.469 
"r_mbytes_per_sec": 0, 00:39:18.469 "w_mbytes_per_sec": 0 00:39:18.469 }, 00:39:18.469 "claimed": false, 00:39:18.469 "zoned": false, 00:39:18.469 "supported_io_types": { 00:39:18.469 "read": true, 00:39:18.469 "write": true, 00:39:18.469 "unmap": true, 00:39:18.469 "flush": false, 00:39:18.469 "reset": true, 00:39:18.469 "nvme_admin": false, 00:39:18.469 "nvme_io": false, 00:39:18.469 "nvme_io_md": false, 00:39:18.469 "write_zeroes": true, 00:39:18.469 "zcopy": false, 00:39:18.469 "get_zone_info": false, 00:39:18.469 "zone_management": false, 00:39:18.469 "zone_append": false, 00:39:18.469 "compare": false, 00:39:18.469 "compare_and_write": false, 00:39:18.469 "abort": false, 00:39:18.469 "seek_hole": true, 00:39:18.469 "seek_data": true, 00:39:18.469 "copy": false, 00:39:18.469 "nvme_iov_md": false 00:39:18.469 }, 00:39:18.469 "driver_specific": { 00:39:18.469 "lvol": { 00:39:18.469 "lvol_store_uuid": "86a3fbbb-7fab-43c5-ba74-247e03574299", 00:39:18.469 "base_bdev": "aio_bdev", 00:39:18.469 "thin_provision": false, 00:39:18.469 "num_allocated_clusters": 38, 00:39:18.469 "snapshot": false, 00:39:18.469 "clone": false, 00:39:18.469 "esnap_clone": false 00:39:18.469 } 00:39:18.469 } 00:39:18.469 } 00:39:18.469 ] 00:39:18.469 06:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:39:18.469 06:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86a3fbbb-7fab-43c5-ba74-247e03574299 00:39:18.469 06:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:39:18.728 06:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:39:18.728 06:29:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86a3fbbb-7fab-43c5-ba74-247e03574299 00:39:18.728 06:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:39:18.987 06:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:39:18.987 06:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bdec6fa5-f223-48fc-bba0-dc8867364121 00:39:19.247 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 86a3fbbb-7fab-43c5-ba74-247e03574299 00:39:19.247 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:19.506 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:19.506 00:39:19.506 real 0m16.927s 00:39:19.506 user 0m34.356s 00:39:19.506 sys 0m3.779s 00:39:19.506 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:19.506 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:19.506 ************************************ 00:39:19.506 END TEST lvs_grow_dirty 00:39:19.506 ************************************ 
00:39:19.506 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:39:19.506 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:39:19.506 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:39:19.506 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:39:19.506 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:39:19.506 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:39:19.506 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:39:19.506 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:39:19.506 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:39:19.506 nvmf_trace.0 00:39:19.765 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:39:19.765 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:39:19.765 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:19.765 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:39:19.765 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:19.765 06:29:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:39:19.765 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:19.765 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:19.765 rmmod nvme_tcp 00:39:19.765 rmmod nvme_fabrics 00:39:19.765 rmmod nvme_keyring 00:39:19.765 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:19.765 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:39:19.765 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:39:19.765 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1246302 ']' 00:39:19.765 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1246302 00:39:19.765 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1246302 ']' 00:39:19.765 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1246302 00:39:19.765 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:39:19.765 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:19.765 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1246302 00:39:19.765 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:19.765 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:19.765 
06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1246302' 00:39:19.765 killing process with pid 1246302 00:39:19.765 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1246302 00:39:19.765 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1246302 00:39:20.025 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:20.025 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:20.025 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:20.025 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:39:20.025 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:39:20.025 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:20.025 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:39:20.025 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:20.025 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:20.025 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:20.025 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:20.025 06:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:21.929 
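The `killprocess 1246302` sequence above checks that the pid is alive, inspects its command name (here `reactor_0`), refuses to touch `sudo`, then kills and waits. A minimal sketch of that pattern (bash assumed; `killprocess_sketch` is a hypothetical name, not the helper itself):

```shell
# Sketch of the killprocess() pattern: verify the pid, refuse to kill
# anything named "sudo", then kill and reap it.
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0          # already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")         # e.g. reactor_0 in the log
    [ "$name" = sudo ] && return 1                  # never kill sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                 # reap if it is our child
}
```

Reaping with `wait` matters: a killed child lingers as a zombie otherwise, so later `kill -0` liveness probes would still report it present.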
06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:21.929 00:39:21.929 real 0m41.553s 00:39:21.929 user 0m51.882s 00:39:21.929 sys 0m10.096s 00:39:21.929 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:21.929 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:21.929 ************************************ 00:39:21.929 END TEST nvmf_lvs_grow 00:39:21.929 ************************************ 00:39:22.189 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:39:22.189 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:22.189 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:22.189 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:22.189 ************************************ 00:39:22.189 START TEST nvmf_bdev_io_wait 00:39:22.189 ************************************ 00:39:22.189 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:39:22.189 * Looking for test storage... 
00:39:22.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:22.189 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:22.189 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:39:22.189 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:22.189 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:22.189 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:22.189 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:22.189 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:22.189 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:39:22.189 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:39:22.189 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:22.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:22.190 --rc genhtml_branch_coverage=1 00:39:22.190 --rc genhtml_function_coverage=1 00:39:22.190 --rc genhtml_legend=1 00:39:22.190 --rc geninfo_all_blocks=1 00:39:22.190 --rc geninfo_unexecuted_blocks=1 00:39:22.190 00:39:22.190 ' 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:22.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:22.190 --rc genhtml_branch_coverage=1 00:39:22.190 --rc genhtml_function_coverage=1 00:39:22.190 --rc genhtml_legend=1 00:39:22.190 --rc geninfo_all_blocks=1 00:39:22.190 --rc geninfo_unexecuted_blocks=1 00:39:22.190 00:39:22.190 ' 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:22.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:22.190 --rc genhtml_branch_coverage=1 00:39:22.190 --rc genhtml_function_coverage=1 00:39:22.190 --rc genhtml_legend=1 00:39:22.190 --rc geninfo_all_blocks=1 00:39:22.190 --rc geninfo_unexecuted_blocks=1 00:39:22.190 00:39:22.190 ' 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:22.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:22.190 --rc genhtml_branch_coverage=1 00:39:22.190 --rc genhtml_function_coverage=1 
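The `lt 1.15 2` trace above is `scripts/common.sh` comparing the lcov version: both versions are split on `IFS=.-:` into arrays and compared field by field numerically. A minimal re-creation of that logic (bash assumed; `ver_lt` is a hypothetical name for the sketch, not the script's own function):

```shell
# Sketch of the cmp_versions '<' path: split on '.', '-' or ':' and
# compare fields numerically, left to right; missing fields count as 0.
ver_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    if (( ${#ver2[@]} > len )); then len=${#ver2[@]}; fi
    for (( v = 0; v < len; v++ )); do
        local f1=${ver1[v]:-0} f2=${ver2[v]:-0}
        if (( f1 < f2 )); then return 0; fi
        if (( f1 > f2 )); then return 1; fi
    done
    return 1                                        # equal is not "less than"
}
```

Numeric comparison is the point: a plain string compare would wrongly rank `1.9` above `1.15`.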
00:39:22.190 --rc genhtml_legend=1 00:39:22.190 --rc geninfo_all_blocks=1 00:39:22.190 --rc geninfo_unexecuted_blocks=1 00:39:22.190 00:39:22.190 ' 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:22.190 06:29:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.190 06:29:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:22.190 06:29:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:22.190 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:22.191 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:22.191 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:22.191 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:22.191 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:22.450 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:22.450 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:22.450 06:29:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:39:22.450 06:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:39:29.021 06:29:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:29.021 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:29.021 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:29.021 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:29.022 Found net devices under 0000:af:00.0: cvl_0_0 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:29.022 Found net devices under 0000:af:00.1: cvl_0_1 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:39:29.022 06:29:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:29.022 06:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:29.022 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:29.022 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:29.022 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:29.022 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:29.022 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:29.022 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:29.022 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:29.022 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:29.022 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:29.022 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:39:29.022 00:39:29.022 --- 10.0.0.2 ping statistics --- 00:39:29.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:29.022 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:39:29.022 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:29.022 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:29.022 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:39:29.022 00:39:29.022 --- 10.0.0.1 ping statistics --- 00:39:29.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:29.022 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:39:29.022 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:29.022 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:39:29.022 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:29.022 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:29.022 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:29.022 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:29.022 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:29.022 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:29.022 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:29.022 06:29:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:39:29.022 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:29.022 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:29.022 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:29.022 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1250272 00:39:29.022 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1250272 00:39:29.022 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:39:29.022 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1250272 ']' 00:39:29.022 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:29.022 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:29.022 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:29.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:29.022 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:29.022 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:29.022 [2024-12-15 06:29:48.247414] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:29.022 [2024-12-15 06:29:48.248317] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:29.022 [2024-12-15 06:29:48.248350] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:29.022 [2024-12-15 06:29:48.324259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:29.022 [2024-12-15 06:29:48.347875] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:29.022 [2024-12-15 06:29:48.347913] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:29.022 [2024-12-15 06:29:48.347920] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:29.022 [2024-12-15 06:29:48.347927] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:29.022 [2024-12-15 06:29:48.347932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:29.022 [2024-12-15 06:29:48.349226] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:29.022 [2024-12-15 06:29:48.349333] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:39:29.022 [2024-12-15 06:29:48.349448] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:29.022 [2024-12-15 06:29:48.349449] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:39:29.023 [2024-12-15 06:29:48.349706] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.023 06:29:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:29.023 [2024-12-15 06:29:48.485093] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:29.023 [2024-12-15 06:29:48.485971] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:29.023 [2024-12-15 06:29:48.485978] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:29.023 [2024-12-15 06:29:48.486131] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:29.023 [2024-12-15 06:29:48.498136] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:29.023 Malloc0 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.023 06:29:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:29.023 [2024-12-15 06:29:48.574306] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1250300 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1250302 00:39:29.023 06:29:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:29.023 { 00:39:29.023 "params": { 00:39:29.023 "name": "Nvme$subsystem", 00:39:29.023 "trtype": "$TEST_TRANSPORT", 00:39:29.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:29.023 "adrfam": "ipv4", 00:39:29.023 "trsvcid": "$NVMF_PORT", 00:39:29.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:29.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:29.023 "hdgst": ${hdgst:-false}, 00:39:29.023 "ddgst": ${ddgst:-false} 00:39:29.023 }, 00:39:29.023 "method": "bdev_nvme_attach_controller" 00:39:29.023 } 00:39:29.023 EOF 00:39:29.023 )") 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1250304 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:29.023 06:29:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:29.023 { 00:39:29.023 "params": { 00:39:29.023 "name": "Nvme$subsystem", 00:39:29.023 "trtype": "$TEST_TRANSPORT", 00:39:29.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:29.023 "adrfam": "ipv4", 00:39:29.023 "trsvcid": "$NVMF_PORT", 00:39:29.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:29.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:29.023 "hdgst": ${hdgst:-false}, 00:39:29.023 "ddgst": ${ddgst:-false} 00:39:29.023 }, 00:39:29.023 "method": "bdev_nvme_attach_controller" 00:39:29.023 } 00:39:29.023 EOF 00:39:29.023 )") 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1250307 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:29.023 { 00:39:29.023 "params": { 00:39:29.023 "name": 
"Nvme$subsystem", 00:39:29.023 "trtype": "$TEST_TRANSPORT", 00:39:29.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:29.023 "adrfam": "ipv4", 00:39:29.023 "trsvcid": "$NVMF_PORT", 00:39:29.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:29.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:29.023 "hdgst": ${hdgst:-false}, 00:39:29.023 "ddgst": ${ddgst:-false} 00:39:29.023 }, 00:39:29.023 "method": "bdev_nvme_attach_controller" 00:39:29.023 } 00:39:29.023 EOF 00:39:29.023 )") 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:29.023 { 00:39:29.023 "params": { 00:39:29.023 "name": "Nvme$subsystem", 00:39:29.023 "trtype": "$TEST_TRANSPORT", 00:39:29.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:29.023 "adrfam": "ipv4", 00:39:29.023 "trsvcid": "$NVMF_PORT", 00:39:29.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:29.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:29.023 "hdgst": ${hdgst:-false}, 00:39:29.023 "ddgst": ${ddgst:-false} 00:39:29.023 }, 00:39:29.023 "method": 
"bdev_nvme_attach_controller" 00:39:29.023 } 00:39:29.023 EOF 00:39:29.023 )") 00:39:29.023 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:29.024 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1250300 00:39:29.024 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:29.024 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:29.024 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:29.024 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:29.024 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:29.024 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:29.024 "params": { 00:39:29.024 "name": "Nvme1", 00:39:29.024 "trtype": "tcp", 00:39:29.024 "traddr": "10.0.0.2", 00:39:29.024 "adrfam": "ipv4", 00:39:29.024 "trsvcid": "4420", 00:39:29.024 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:29.024 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:29.024 "hdgst": false, 00:39:29.024 "ddgst": false 00:39:29.024 }, 00:39:29.024 "method": "bdev_nvme_attach_controller" 00:39:29.024 }' 00:39:29.024 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:39:29.024 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:29.024 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:29.024 "params": { 00:39:29.024 "name": "Nvme1", 00:39:29.024 "trtype": "tcp", 00:39:29.024 "traddr": "10.0.0.2", 00:39:29.024 "adrfam": "ipv4", 00:39:29.024 "trsvcid": "4420", 00:39:29.024 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:29.024 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:29.024 "hdgst": false, 00:39:29.024 "ddgst": false 00:39:29.024 }, 00:39:29.024 "method": "bdev_nvme_attach_controller" 00:39:29.024 }' 00:39:29.024 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:29.024 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:29.024 "params": { 00:39:29.024 "name": "Nvme1", 00:39:29.024 "trtype": "tcp", 00:39:29.024 "traddr": "10.0.0.2", 00:39:29.024 "adrfam": "ipv4", 00:39:29.024 "trsvcid": "4420", 00:39:29.024 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:29.024 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:29.024 "hdgst": false, 00:39:29.024 "ddgst": false 00:39:29.024 }, 00:39:29.024 "method": "bdev_nvme_attach_controller" 00:39:29.024 }' 00:39:29.024 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:29.024 06:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:29.024 "params": { 00:39:29.024 "name": "Nvme1", 00:39:29.024 "trtype": "tcp", 00:39:29.024 "traddr": "10.0.0.2", 00:39:29.024 "adrfam": "ipv4", 00:39:29.024 "trsvcid": "4420", 00:39:29.024 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:29.024 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:29.024 "hdgst": false, 00:39:29.024 "ddgst": false 00:39:29.024 }, 00:39:29.024 "method": "bdev_nvme_attach_controller" 
00:39:29.024 }' 00:39:29.024 [2024-12-15 06:29:48.624286] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:29.024 [2024-12-15 06:29:48.624330] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:39:29.024 [2024-12-15 06:29:48.626404] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:29.024 [2024-12-15 06:29:48.626455] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:39:29.024 [2024-12-15 06:29:48.628329] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:29.024 [2024-12-15 06:29:48.628370] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:39:29.024 [2024-12-15 06:29:48.631117] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:39:29.024 [2024-12-15 06:29:48.631160] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:39:29.024 [2024-12-15 06:29:48.819535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:29.024 [2024-12-15 06:29:48.836848] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:39:29.024 [2024-12-15 06:29:48.904096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:29.024 [2024-12-15 06:29:48.921416] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:39:29.024 [2024-12-15 06:29:49.011608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:29.024 [2024-12-15 06:29:49.032533] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:39:29.024 [2024-12-15 06:29:49.071530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:29.024 [2024-12-15 06:29:49.087746] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:39:29.282 Running I/O for 1 seconds... 00:39:29.282 Running I/O for 1 seconds... 00:39:29.282 Running I/O for 1 seconds... 00:39:29.282 Running I/O for 1 seconds... 
00:39:30.216 12548.00 IOPS, 49.02 MiB/s 00:39:30.216 Latency(us) 00:39:30.216 [2024-12-15T05:29:50.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:30.216 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:39:30.216 Nvme1n1 : 1.01 12587.21 49.17 0.00 0.00 10131.78 3120.76 13356.86 00:39:30.216 [2024-12-15T05:29:50.356Z] =================================================================================================================== 00:39:30.216 [2024-12-15T05:29:50.356Z] Total : 12587.21 49.17 0.00 0.00 10131.78 3120.76 13356.86 00:39:30.216 10132.00 IOPS, 39.58 MiB/s 00:39:30.216 Latency(us) 00:39:30.216 [2024-12-15T05:29:50.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:30.216 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:39:30.216 Nvme1n1 : 1.01 10205.45 39.87 0.00 0.00 12501.49 1646.20 15354.15 00:39:30.216 [2024-12-15T05:29:50.356Z] =================================================================================================================== 00:39:30.216 [2024-12-15T05:29:50.356Z] Total : 10205.45 39.87 0.00 0.00 12501.49 1646.20 15354.15 00:39:30.216 242336.00 IOPS, 946.62 MiB/s 00:39:30.216 Latency(us) 00:39:30.216 [2024-12-15T05:29:50.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:30.216 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:39:30.216 Nvme1n1 : 1.00 241967.74 945.19 0.00 0.00 526.19 222.35 1497.97 00:39:30.216 [2024-12-15T05:29:50.356Z] =================================================================================================================== 00:39:30.216 [2024-12-15T05:29:50.356Z] Total : 241967.74 945.19 0.00 0.00 526.19 222.35 1497.97 00:39:30.482 11632.00 IOPS, 45.44 MiB/s 00:39:30.482 Latency(us) 00:39:30.482 [2024-12-15T05:29:50.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:30.482 Job: Nvme1n1 (Core 
Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:39:30.482 Nvme1n1 : 1.00 11723.08 45.79 0.00 0.00 10894.91 2012.89 16976.94 00:39:30.482 [2024-12-15T05:29:50.622Z] =================================================================================================================== 00:39:30.482 [2024-12-15T05:29:50.622Z] Total : 11723.08 45.79 0.00 0.00 10894.91 2012.89 16976.94 00:39:30.482 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1250302 00:39:30.482 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1250304 00:39:30.482 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1250307 00:39:30.482 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:30.482 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.482 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:30.482 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.482 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:39:30.482 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:39:30.482 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:30.482 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:39:30.482 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:30.482 06:29:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:39:30.482 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:30.482 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:30.482 rmmod nvme_tcp 00:39:30.482 rmmod nvme_fabrics 00:39:30.482 rmmod nvme_keyring 00:39:30.482 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:30.482 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:39:30.482 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:39:30.482 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1250272 ']' 00:39:30.482 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1250272 00:39:30.482 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1250272 ']' 00:39:30.482 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1250272 00:39:30.482 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:39:30.482 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:30.482 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1250272 00:39:30.745 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:30.745 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:30.745 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1250272' 00:39:30.745 killing process with pid 1250272 00:39:30.745 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1250272 00:39:30.745 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1250272 00:39:30.745 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:30.745 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:30.745 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:30.745 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:39:30.745 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:39:30.745 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:30.745 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:39:30.745 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:30.745 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:30.745 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:30.745 06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:30.745 
06:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:33.281 06:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:33.281 00:39:33.281 real 0m10.732s 00:39:33.281 user 0m15.097s 00:39:33.281 sys 0m6.402s 00:39:33.281 06:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:33.281 06:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:33.281 ************************************ 00:39:33.281 END TEST nvmf_bdev_io_wait 00:39:33.281 ************************************ 00:39:33.281 06:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:39:33.281 06:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:33.281 06:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:33.281 06:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:33.281 ************************************ 00:39:33.281 START TEST nvmf_queue_depth 00:39:33.281 ************************************ 00:39:33.281 06:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:39:33.281 * Looking for test storage... 
00:39:33.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:33.281 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:33.281 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:39:33.281 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:33.281 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:33.281 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:33.281 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:33.281 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:33.281 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:39:33.281 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:39:33.281 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:39:33.281 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:39:33.281 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:39:33.281 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:33.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:33.282 --rc genhtml_branch_coverage=1 00:39:33.282 --rc genhtml_function_coverage=1 00:39:33.282 --rc genhtml_legend=1 00:39:33.282 --rc geninfo_all_blocks=1 00:39:33.282 --rc geninfo_unexecuted_blocks=1 00:39:33.282 00:39:33.282 ' 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:33.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:33.282 --rc genhtml_branch_coverage=1 00:39:33.282 --rc genhtml_function_coverage=1 00:39:33.282 --rc genhtml_legend=1 00:39:33.282 --rc geninfo_all_blocks=1 00:39:33.282 --rc geninfo_unexecuted_blocks=1 00:39:33.282 00:39:33.282 ' 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:33.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:33.282 --rc genhtml_branch_coverage=1 00:39:33.282 --rc genhtml_function_coverage=1 00:39:33.282 --rc genhtml_legend=1 00:39:33.282 --rc geninfo_all_blocks=1 00:39:33.282 --rc geninfo_unexecuted_blocks=1 00:39:33.282 00:39:33.282 ' 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:33.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:33.282 --rc genhtml_branch_coverage=1 00:39:33.282 --rc genhtml_function_coverage=1 00:39:33.282 --rc genhtml_legend=1 00:39:33.282 --rc 
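The trace above steps through the dotted-version comparison in scripts/common.sh (cmp_versions splitting `1.15` and `2` on `IFS=.-:` and comparing component by component). A minimal standalone sketch of the same split-and-compare idea, using a hypothetical `ver_lt` helper name rather than the SPDK function itself:

```shell
#!/usr/bin/env bash
# Sketch of a dotted-decimal "less than" compare, in the spirit of the
# cmp_versions trace above. ver_lt is a hypothetical name for illustration.
ver_lt() {
    local IFS=.-:                 # split on dots, dashes, and colons, as in the trace
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                      # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
ver_lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
```

This mirrors why the log reports `lt 1.15 2` as true (1 < 2 on the first component) and then selects the lcov branch/function coverage options for the newer tool.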
geninfo_all_blocks=1 00:39:33.282 --rc geninfo_unexecuted_blocks=1 00:39:33.282 00:39:33.282 ' 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:33.282 06:29:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:33.282 06:29:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:33.282 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:33.283 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:33.283 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:33.283 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:33.283 06:29:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:33.283 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:39:33.283 06:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:39:38.557 
06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:38.557 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:38.557 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:38.558 06:29:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:38.558 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:38.558 Found net devices under 0000:af:00.0: cvl_0_0 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:38.558 Found net devices under 0000:af:00.1: cvl_0_1 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:38.558 06:29:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:38.558 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:38.817 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:38.817 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:38.817 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:38.817 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:38.817 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:38.817 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:38.817 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:38.817 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:38.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:38.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:39:38.817 00:39:38.817 --- 10.0.0.2 ping statistics --- 00:39:38.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:38.817 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:39:38.817 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:38.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:38.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:39:38.817 00:39:38.817 --- 10.0.0.1 ping statistics --- 00:39:38.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:38.817 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:39:38.817 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:38.817 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:39:38.817 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:38.817 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:38.817 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:38.817 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:38.817 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:38.817 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:38.817 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:38.817 06:29:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:39:38.817 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:38.817 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:38.817 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:39.077 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1254016 00:39:39.077 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1254016 00:39:39.077 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:39.077 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1254016 ']' 00:39:39.077 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:39.077 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:39.077 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:39.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:39.077 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:39.077 06:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:39.077 [2024-12-15 06:29:59.006019] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:39.077 [2024-12-15 06:29:59.006986] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:39.077 [2024-12-15 06:29:59.007033] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:39.077 [2024-12-15 06:29:59.090510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:39.077 [2024-12-15 06:29:59.111616] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:39.077 [2024-12-15 06:29:59.111653] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:39.077 [2024-12-15 06:29:59.111660] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:39.077 [2024-12-15 06:29:59.111666] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:39.077 [2024-12-15 06:29:59.111671] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:39.077 [2024-12-15 06:29:59.112137] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:39.077 [2024-12-15 06:29:59.173897] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:39.077 [2024-12-15 06:29:59.174115] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:39.077 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:39.077 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:39:39.077 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:39.077 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:39.077 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:39.336 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:39.336 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:39.336 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.336 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:39.336 [2024-12-15 06:29:59.252872] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:39.336 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.336 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:39.336 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.336 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:39.336 Malloc0 00:39:39.336 06:29:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.336 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:39.336 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.336 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:39.336 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.336 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:39.336 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.336 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:39.336 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.336 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:39.336 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.336 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:39.336 [2024-12-15 06:29:59.328955] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:39.336 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.336 
06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1254210 00:39:39.337 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:39.337 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:39:39.337 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1254210 /var/tmp/bdevperf.sock 00:39:39.337 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1254210 ']' 00:39:39.337 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:39.337 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:39.337 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:39.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:39.337 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:39.337 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:39.337 [2024-12-15 06:29:59.382263] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:39:39.337 [2024-12-15 06:29:59.382310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1254210 ] 00:39:39.337 [2024-12-15 06:29:59.456403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:39.596 [2024-12-15 06:29:59.479690] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:39.596 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:39.596 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:39:39.596 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:39.596 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.596 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:39.855 NVMe0n1 00:39:39.855 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.855 06:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:39.855 Running I/O for 10 seconds... 
00:39:41.744 11922.00 IOPS, 46.57 MiB/s [2024-12-15T05:30:03.260Z] 12277.50 IOPS, 47.96 MiB/s [2024-12-15T05:30:04.195Z] 12290.00 IOPS, 48.01 MiB/s [2024-12-15T05:30:05.131Z] 12379.00 IOPS, 48.36 MiB/s [2024-12-15T05:30:06.064Z] 12423.20 IOPS, 48.53 MiB/s [2024-12-15T05:30:07.002Z] 12462.67 IOPS, 48.68 MiB/s [2024-12-15T05:30:07.939Z] 12536.29 IOPS, 48.97 MiB/s [2024-12-15T05:30:09.316Z] 12544.88 IOPS, 49.00 MiB/s [2024-12-15T05:30:09.884Z] 12548.89 IOPS, 49.02 MiB/s [2024-12-15T05:30:10.143Z] 12586.70 IOPS, 49.17 MiB/s 00:39:50.003 Latency(us) 00:39:50.003 [2024-12-15T05:30:10.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:50.003 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:39:50.003 Verification LBA range: start 0x0 length 0x4000 00:39:50.003 NVMe0n1 : 10.11 12549.89 49.02 0.00 0.00 81011.35 18599.74 68906.42 00:39:50.003 [2024-12-15T05:30:10.143Z] =================================================================================================================== 00:39:50.003 [2024-12-15T05:30:10.143Z] Total : 12549.89 49.02 0.00 0.00 81011.35 18599.74 68906.42 00:39:50.003 { 00:39:50.003 "results": [ 00:39:50.003 { 00:39:50.003 "job": "NVMe0n1", 00:39:50.003 "core_mask": "0x1", 00:39:50.003 "workload": "verify", 00:39:50.003 "status": "finished", 00:39:50.003 "verify_range": { 00:39:50.003 "start": 0, 00:39:50.003 "length": 16384 00:39:50.003 }, 00:39:50.003 "queue_depth": 1024, 00:39:50.003 "io_size": 4096, 00:39:50.003 "runtime": 10.107742, 00:39:50.003 "iops": 12549.885028723527, 00:39:50.003 "mibps": 49.02298839345128, 00:39:50.003 "io_failed": 0, 00:39:50.003 "io_timeout": 0, 00:39:50.003 "avg_latency_us": 81011.34683906239, 00:39:50.003 "min_latency_us": 18599.74095238095, 00:39:50.003 "max_latency_us": 68906.42285714285 00:39:50.003 } 00:39:50.003 ], 00:39:50.003 "core_count": 1 00:39:50.003 } 00:39:50.003 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 1254210 00:39:50.003 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1254210 ']' 00:39:50.003 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1254210 00:39:50.003 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:50.003 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:50.003 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1254210 00:39:50.003 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:50.003 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:50.003 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1254210' 00:39:50.003 killing process with pid 1254210 00:39:50.003 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1254210 00:39:50.003 Received shutdown signal, test time was about 10.000000 seconds 00:39:50.003 00:39:50.003 Latency(us) 00:39:50.003 [2024-12-15T05:30:10.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:50.003 [2024-12-15T05:30:10.143Z] =================================================================================================================== 00:39:50.003 [2024-12-15T05:30:10.143Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:50.003 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1254210 00:39:50.263 06:30:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:39:50.263 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:39:50.263 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:50.263 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:39:50.263 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:50.263 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:39:50.263 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:50.263 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:50.263 rmmod nvme_tcp 00:39:50.263 rmmod nvme_fabrics 00:39:50.263 rmmod nvme_keyring 00:39:50.263 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:50.263 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:39:50.263 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:39:50.263 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1254016 ']' 00:39:50.263 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1254016 00:39:50.263 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1254016 ']' 00:39:50.263 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1254016 00:39:50.263 06:30:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:50.263 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:50.263 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1254016 00:39:50.263 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:50.263 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:50.263 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1254016' 00:39:50.263 killing process with pid 1254016 00:39:50.263 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1254016 00:39:50.263 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1254016 00:39:50.522 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:50.522 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:50.522 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:50.522 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:39:50.522 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:39:50.522 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:39:50.522 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:39:50.522 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:50.522 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:50.522 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:50.522 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:50.522 06:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:53.057 00:39:53.057 real 0m19.672s 00:39:53.057 user 0m22.914s 00:39:53.057 sys 0m6.173s 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:53.057 ************************************ 00:39:53.057 END TEST nvmf_queue_depth 00:39:53.057 ************************************ 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:53.057 ************************************ 00:39:53.057 START 
TEST nvmf_target_multipath 00:39:53.057 ************************************ 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:53.057 * Looking for test storage... 00:39:53.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:39:53.057 06:30:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:53.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.057 --rc genhtml_branch_coverage=1 00:39:53.057 --rc genhtml_function_coverage=1 00:39:53.057 --rc genhtml_legend=1 00:39:53.057 --rc geninfo_all_blocks=1 00:39:53.057 --rc geninfo_unexecuted_blocks=1 00:39:53.057 00:39:53.057 ' 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:53.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.057 --rc genhtml_branch_coverage=1 00:39:53.057 --rc genhtml_function_coverage=1 00:39:53.057 --rc genhtml_legend=1 00:39:53.057 --rc geninfo_all_blocks=1 00:39:53.057 --rc geninfo_unexecuted_blocks=1 00:39:53.057 00:39:53.057 ' 00:39:53.057 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:53.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.057 --rc genhtml_branch_coverage=1 00:39:53.057 --rc genhtml_function_coverage=1 00:39:53.057 --rc genhtml_legend=1 00:39:53.057 --rc geninfo_all_blocks=1 00:39:53.057 --rc geninfo_unexecuted_blocks=1 00:39:53.057 00:39:53.057 ' 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:53.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.058 --rc genhtml_branch_coverage=1 00:39:53.058 --rc genhtml_function_coverage=1 00:39:53.058 --rc genhtml_legend=1 00:39:53.058 --rc geninfo_all_blocks=1 00:39:53.058 --rc geninfo_unexecuted_blocks=1 00:39:53.058 00:39:53.058 ' 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:53.058 06:30:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:53.058 06:30:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:39:53.058 06:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:39:58.468 06:30:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:58.468 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:58.468 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:58.468 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:58.469 Found net devices under 0000:af:00.0: cvl_0_0 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:58.469 06:30:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:58.469 Found net devices under 0000:af:00.1: cvl_0_1 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:58.469 06:30:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:58.469 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:58.727 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:58.727 06:30:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:58.727 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:58.727 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:58.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:58.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:39:58.727 00:39:58.727 --- 10.0.0.2 ping statistics --- 00:39:58.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:58.727 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:39:58.727 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:58.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:58.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:39:58.727 00:39:58.727 --- 10.0.0.1 ping statistics --- 00:39:58.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:58.727 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:39:58.727 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:58.727 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:39:58.727 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:58.727 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:58.727 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:58.727 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:58.727 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:58.727 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:58.727 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:58.727 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:39:58.727 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:39:58.727 only one NIC for nvmf test 00:39:58.727 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:39:58.727 06:30:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:58.727 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:39:58.727 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:58.727 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:39:58.727 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:58.727 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:58.727 rmmod nvme_tcp 00:39:58.727 rmmod nvme_fabrics 00:39:58.727 rmmod nvme_keyring 00:39:58.727 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:58.727 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:39:58.727 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:58.727 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:39:58.727 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:58.727 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:58.727 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:58.727 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:39:58.727 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:39:58.728 06:30:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:58.728 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:39:58.728 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:58.728 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:58.728 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:58.728 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:58.728 06:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:01.261 
06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:01.261 00:40:01.261 real 0m8.233s 00:40:01.261 user 0m1.869s 00:40:01.261 sys 0m4.381s 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:01.261 ************************************ 00:40:01.261 END TEST nvmf_target_multipath 00:40:01.261 ************************************ 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:01.261 ************************************ 00:40:01.261 START TEST nvmf_zcopy 00:40:01.261 ************************************ 00:40:01.261 06:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:40:01.261 * Looking for test storage... 
00:40:01.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:01.261 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:01.261 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:40:01.261 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:01.261 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:01.261 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:01.261 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:01.261 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:01.261 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:40:01.261 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:40:01.261 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:40:01.261 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:40:01.261 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:40:01.261 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:40:01.261 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:40:01.261 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:01.261 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:40:01.261 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:40:01.261 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:01.261 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:01.261 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:40:01.261 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:40:01.261 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:40:01.262 06:30:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:01.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:01.262 --rc genhtml_branch_coverage=1 00:40:01.262 --rc genhtml_function_coverage=1 00:40:01.262 --rc genhtml_legend=1 00:40:01.262 --rc geninfo_all_blocks=1 00:40:01.262 --rc geninfo_unexecuted_blocks=1 00:40:01.262 00:40:01.262 ' 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:01.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:01.262 --rc genhtml_branch_coverage=1 00:40:01.262 --rc genhtml_function_coverage=1 00:40:01.262 --rc genhtml_legend=1 00:40:01.262 --rc geninfo_all_blocks=1 00:40:01.262 --rc geninfo_unexecuted_blocks=1 00:40:01.262 00:40:01.262 ' 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:01.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:01.262 --rc genhtml_branch_coverage=1 00:40:01.262 --rc genhtml_function_coverage=1 00:40:01.262 --rc genhtml_legend=1 00:40:01.262 --rc geninfo_all_blocks=1 00:40:01.262 --rc geninfo_unexecuted_blocks=1 00:40:01.262 00:40:01.262 ' 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:01.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:01.262 --rc genhtml_branch_coverage=1 00:40:01.262 --rc genhtml_function_coverage=1 00:40:01.262 --rc genhtml_legend=1 00:40:01.262 --rc geninfo_all_blocks=1 00:40:01.262 --rc geninfo_unexecuted_blocks=1 00:40:01.262 00:40:01.262 ' 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:01.262 06:30:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:01.262 06:30:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:40:01.262 06:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:07.832 
06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:07.832 06:30:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:07.832 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:07.832 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:07.832 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:07.833 Found net devices under 0000:af:00.0: cvl_0_0 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:07.833 Found net devices under 0000:af:00.1: cvl_0_1 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:07.833 06:30:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:07.833 06:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:07.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:07.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:40:07.833 00:40:07.833 --- 10.0.0.2 ping statistics --- 00:40:07.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:07.833 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:07.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:07.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:40:07.833 00:40:07.833 --- 10.0.0.1 ping statistics --- 00:40:07.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:07.833 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=1263243 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1263243 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1263243 ']' 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:07.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:07.833 [2024-12-15 06:30:27.119064] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:07.833 [2024-12-15 06:30:27.120054] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:40:07.833 [2024-12-15 06:30:27.120096] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:07.833 [2024-12-15 06:30:27.199179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:07.833 [2024-12-15 06:30:27.220427] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:07.833 [2024-12-15 06:30:27.220460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:07.833 [2024-12-15 06:30:27.220467] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:07.833 [2024-12-15 06:30:27.220473] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:07.833 [2024-12-15 06:30:27.220478] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:07.833 [2024-12-15 06:30:27.220944] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:07.833 [2024-12-15 06:30:27.283515] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:07.833 [2024-12-15 06:30:27.283712] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.833 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:07.833 [2024-12-15 06:30:27.349595] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:07.834 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.834 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:40:07.834 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.834 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:07.834 
06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.834 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:07.834 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.834 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:07.834 [2024-12-15 06:30:27.377818] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:07.834 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.834 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:07.834 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.834 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:07.834 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.834 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:40:07.834 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.834 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:07.834 malloc0 00:40:07.834 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.834 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:40:07.834 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.834 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:07.834 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.834 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:40:07.834 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:40:07.834 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:40:07.834 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:40:07.834 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:07.834 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:07.834 { 00:40:07.834 "params": { 00:40:07.834 "name": "Nvme$subsystem", 00:40:07.834 "trtype": "$TEST_TRANSPORT", 00:40:07.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:07.834 "adrfam": "ipv4", 00:40:07.834 "trsvcid": "$NVMF_PORT", 00:40:07.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:07.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:07.834 "hdgst": ${hdgst:-false}, 00:40:07.834 "ddgst": ${ddgst:-false} 00:40:07.834 }, 00:40:07.834 "method": "bdev_nvme_attach_controller" 00:40:07.834 } 00:40:07.834 EOF 00:40:07.834 )") 00:40:07.834 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:40:07.834 06:30:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:40:07.834 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:40:07.834 06:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:07.834 "params": { 00:40:07.834 "name": "Nvme1", 00:40:07.834 "trtype": "tcp", 00:40:07.834 "traddr": "10.0.0.2", 00:40:07.834 "adrfam": "ipv4", 00:40:07.834 "trsvcid": "4420", 00:40:07.834 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:07.834 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:07.834 "hdgst": false, 00:40:07.834 "ddgst": false 00:40:07.834 }, 00:40:07.834 "method": "bdev_nvme_attach_controller" 00:40:07.834 }' 00:40:07.834 [2024-12-15 06:30:27.473924] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:40:07.834 [2024-12-15 06:30:27.473968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263267 ] 00:40:07.834 [2024-12-15 06:30:27.548512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:07.834 [2024-12-15 06:30:27.570705] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:07.834 Running I/O for 10 seconds... 
00:40:09.756 8497.00 IOPS, 66.38 MiB/s [2024-12-15T05:30:30.832Z] 8573.00 IOPS, 66.98 MiB/s [2024-12-15T05:30:32.210Z] 8538.67 IOPS, 66.71 MiB/s [2024-12-15T05:30:32.777Z] 8558.25 IOPS, 66.86 MiB/s [2024-12-15T05:30:34.155Z] 8587.40 IOPS, 67.09 MiB/s [2024-12-15T05:30:35.091Z] 8594.83 IOPS, 67.15 MiB/s [2024-12-15T05:30:36.027Z] 8605.14 IOPS, 67.23 MiB/s [2024-12-15T05:30:36.963Z] 8620.88 IOPS, 67.35 MiB/s [2024-12-15T05:30:37.900Z] 8627.33 IOPS, 67.40 MiB/s [2024-12-15T05:30:37.900Z] 8632.80 IOPS, 67.44 MiB/s 00:40:17.760 Latency(us) 00:40:17.760 [2024-12-15T05:30:37.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:17.760 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:40:17.760 Verification LBA range: start 0x0 length 0x1000 00:40:17.760 Nvme1n1 : 10.05 8598.73 67.18 0.00 0.00 14787.20 1950.48 42692.02 00:40:17.760 [2024-12-15T05:30:37.900Z] =================================================================================================================== 00:40:17.760 [2024-12-15T05:30:37.900Z] Total : 8598.73 67.18 0.00 0.00 14787.20 1950.48 42692.02 00:40:18.019 06:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1264831 00:40:18.019 06:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:40:18.019 06:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:18.019 06:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:40:18.019 06:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:40:18.019 06:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:40:18.019 06:30:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:40:18.019 06:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:18.019 06:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:18.019 { 00:40:18.019 "params": { 00:40:18.019 "name": "Nvme$subsystem", 00:40:18.019 "trtype": "$TEST_TRANSPORT", 00:40:18.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:18.019 "adrfam": "ipv4", 00:40:18.019 "trsvcid": "$NVMF_PORT", 00:40:18.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:18.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:18.019 "hdgst": ${hdgst:-false}, 00:40:18.019 "ddgst": ${ddgst:-false} 00:40:18.019 }, 00:40:18.019 "method": "bdev_nvme_attach_controller" 00:40:18.019 } 00:40:18.019 EOF 00:40:18.019 )") 00:40:18.019 [2024-12-15 06:30:37.989280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.019 [2024-12-15 06:30:37.989310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.019 06:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:40:18.019 06:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:40:18.019 06:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:40:18.019 06:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:18.019 "params": { 00:40:18.019 "name": "Nvme1", 00:40:18.019 "trtype": "tcp", 00:40:18.019 "traddr": "10.0.0.2", 00:40:18.019 "adrfam": "ipv4", 00:40:18.019 "trsvcid": "4420", 00:40:18.019 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:18.019 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:18.019 "hdgst": false, 00:40:18.019 "ddgst": false 00:40:18.019 }, 00:40:18.019 "method": "bdev_nvme_attach_controller" 00:40:18.020 }' 00:40:18.020 [2024-12-15 06:30:38.001250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.020 [2024-12-15 06:30:38.001265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.020 [2024-12-15 06:30:38.013242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.020 [2024-12-15 06:30:38.013253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.020 [2024-12-15 06:30:38.025241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.020 [2024-12-15 06:30:38.025252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.020 [2024-12-15 06:30:38.030719] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:40:18.020 [2024-12-15 06:30:38.030767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264831 ] 00:40:18.020 [2024-12-15 06:30:38.037242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.020 [2024-12-15 06:30:38.037257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.020 [2024-12-15 06:30:38.049238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.020 [2024-12-15 06:30:38.049250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.020 [2024-12-15 06:30:38.061240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.020 [2024-12-15 06:30:38.061251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.020 [2024-12-15 06:30:38.073239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.020 [2024-12-15 06:30:38.073249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.020 [2024-12-15 06:30:38.085239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.020 [2024-12-15 06:30:38.085251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.020 [2024-12-15 06:30:38.097240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.020 [2024-12-15 06:30:38.097252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.020 [2024-12-15 06:30:38.104683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:18.020 [2024-12-15 06:30:38.109242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:40:18.020 [2024-12-15 06:30:38.109255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.020 [2024-12-15 06:30:38.121240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.020 [2024-12-15 06:30:38.121255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.020 [2024-12-15 06:30:38.125767] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:18.020 [2024-12-15 06:30:38.133241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.020 [2024-12-15 06:30:38.133254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.020 [2024-12-15 06:30:38.145251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.020 [2024-12-15 06:30:38.145272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.020 [2024-12-15 06:30:38.157249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.020 [2024-12-15 06:30:38.157265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.278 [2024-12-15 06:30:38.169242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.278 [2024-12-15 06:30:38.169255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.278 [2024-12-15 06:30:38.181243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.278 [2024-12-15 06:30:38.181256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.278 [2024-12-15 06:30:38.193243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.278 [2024-12-15 06:30:38.193256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.278 [2024-12-15 06:30:38.205251] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.278 [2024-12-15 06:30:38.205270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.278 [2024-12-15 06:30:38.217247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.278 [2024-12-15 06:30:38.217264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.278 [2024-12-15 06:30:38.229245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.278 [2024-12-15 06:30:38.229260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.278 [2024-12-15 06:30:38.241240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.278 [2024-12-15 06:30:38.241251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.278 [2024-12-15 06:30:38.253240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.278 [2024-12-15 06:30:38.253251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.278 [2024-12-15 06:30:38.265246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.278 [2024-12-15 06:30:38.265262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.278 [2024-12-15 06:30:38.277242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.278 [2024-12-15 06:30:38.277258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.278 [2024-12-15 06:30:38.289242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.278 [2024-12-15 06:30:38.289257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.278 [2024-12-15 06:30:38.301242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:18.279 [2024-12-15 06:30:38.301255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.279 [2024-12-15 06:30:38.313239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.279 [2024-12-15 06:30:38.313257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.279 [2024-12-15 06:30:38.325243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.279 [2024-12-15 06:30:38.325257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.279 [2024-12-15 06:30:38.337239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.279 [2024-12-15 06:30:38.337250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.279 [2024-12-15 06:30:38.349239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.279 [2024-12-15 06:30:38.349250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.279 [2024-12-15 06:30:38.361238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.279 [2024-12-15 06:30:38.361249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.279 [2024-12-15 06:30:38.373242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.279 [2024-12-15 06:30:38.373256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.279 [2024-12-15 06:30:38.385239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.279 [2024-12-15 06:30:38.385250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.279 [2024-12-15 06:30:38.397241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.279 
[2024-12-15 06:30:38.397253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.279 [2024-12-15 06:30:38.409239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.279 [2024-12-15 06:30:38.409251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.538 [2024-12-15 06:30:38.421247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.538 [2024-12-15 06:30:38.421276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.538 [2024-12-15 06:30:38.433242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.538 [2024-12-15 06:30:38.433258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.538 Running I/O for 5 seconds... 00:40:18.538 [2024-12-15 06:30:38.448067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.538 [2024-12-15 06:30:38.448087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.538 [2024-12-15 06:30:38.462673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.538 [2024-12-15 06:30:38.462692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.538 [2024-12-15 06:30:38.477269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.538 [2024-12-15 06:30:38.477289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.538 [2024-12-15 06:30:38.488114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.538 [2024-12-15 06:30:38.488133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.538 [2024-12-15 06:30:38.502596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.538 [2024-12-15 
06:30:38.502615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.538 [2024-12-15 06:30:38.517042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.538 [2024-12-15 06:30:38.517062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.538 [2024-12-15 06:30:38.531216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.538 [2024-12-15 06:30:38.531235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.538 [2024-12-15 06:30:38.545766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.538 [2024-12-15 06:30:38.545784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.538 [2024-12-15 06:30:38.561817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.538 [2024-12-15 06:30:38.561839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.538 [2024-12-15 06:30:38.573857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.538 [2024-12-15 06:30:38.573875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.538 [2024-12-15 06:30:38.589764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.538 [2024-12-15 06:30:38.589782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.538 [2024-12-15 06:30:38.601835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.538 [2024-12-15 06:30:38.601853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.538 [2024-12-15 06:30:38.616949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.538 [2024-12-15 06:30:38.616984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:40:18.538 [2024-12-15 06:30:38.630894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.538 [2024-12-15 06:30:38.630918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.538 [2024-12-15 06:30:38.645573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.538 [2024-12-15 06:30:38.645590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.538 [2024-12-15 06:30:38.661049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.538 [2024-12-15 06:30:38.661069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.538 [2024-12-15 06:30:38.675052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.538 [2024-12-15 06:30:38.675072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.797 [2024-12-15 06:30:38.689439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.797 [2024-12-15 06:30:38.689459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.797 [2024-12-15 06:30:38.700875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.797 [2024-12-15 06:30:38.700893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.797 [2024-12-15 06:30:38.715166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.797 [2024-12-15 06:30:38.715184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.797 [2024-12-15 06:30:38.729422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.797 [2024-12-15 06:30:38.729441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.797 
[2024-12-15 06:30:38.741352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.797 [2024-12-15 06:30:38.741371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.797 [2024-12-15 06:30:38.754982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.797 [2024-12-15 06:30:38.755005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.797 [2024-12-15 06:30:38.769451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.797 [2024-12-15 06:30:38.769470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.797 [2024-12-15 06:30:38.780366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.797 [2024-12-15 06:30:38.780385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.797 [2024-12-15 06:30:38.795220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.797 [2024-12-15 06:30:38.795239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.797 [2024-12-15 06:30:38.809819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.797 [2024-12-15 06:30:38.809837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.797 [2024-12-15 06:30:38.824987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.797 [2024-12-15 06:30:38.825017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.797 [2024-12-15 06:30:38.838726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.797 [2024-12-15 06:30:38.838746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.797 [2024-12-15 06:30:38.853293] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.797 [2024-12-15 06:30:38.853312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.798 [2024-12-15 06:30:38.865225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.798 [2024-12-15 06:30:38.865243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.798 [2024-12-15 06:30:38.878508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.798 [2024-12-15 06:30:38.878528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.798 [2024-12-15 06:30:38.893581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.798 [2024-12-15 06:30:38.893599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.798 [2024-12-15 06:30:38.906113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.798 [2024-12-15 06:30:38.906132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.798 [2024-12-15 06:30:38.921182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.798 [2024-12-15 06:30:38.921201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.798 [2024-12-15 06:30:38.934710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.798 [2024-12-15 06:30:38.934730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.057 [2024-12-15 06:30:38.949645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.057 [2024-12-15 06:30:38.949664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.057 [2024-12-15 06:30:38.965027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:19.057 [2024-12-15 06:30:38.965046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.057 [2024-12-15 06:30:38.978381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.057 [2024-12-15 06:30:38.978399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.057 [2024-12-15 06:30:38.993060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.057 [2024-12-15 06:30:38.993079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.057 [2024-12-15 06:30:39.007216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.057 [2024-12-15 06:30:39.007236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.057 [2024-12-15 06:30:39.021861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.057 [2024-12-15 06:30:39.021879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.057 [2024-12-15 06:30:39.037220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.057 [2024-12-15 06:30:39.037240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.057 [2024-12-15 06:30:39.050222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.057 [2024-12-15 06:30:39.050241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.057 [2024-12-15 06:30:39.065645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.057 [2024-12-15 06:30:39.065663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.057 [2024-12-15 06:30:39.080641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.057 
[2024-12-15 06:30:39.080659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.057 [2024-12-15 06:30:39.094679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.057 [2024-12-15 06:30:39.094702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.057 [2024-12-15 06:30:39.109131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.057 [2024-12-15 06:30:39.109151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.057 [2024-12-15 06:30:39.121006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.057 [2024-12-15 06:30:39.121025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.057 [2024-12-15 06:30:39.134737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.057 [2024-12-15 06:30:39.134756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.057 [2024-12-15 06:30:39.149619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.057 [2024-12-15 06:30:39.149637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.057 [2024-12-15 06:30:39.162971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.057 [2024-12-15 06:30:39.162990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.057 [2024-12-15 06:30:39.177085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.057 [2024-12-15 06:30:39.177104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.057 [2024-12-15 06:30:39.189959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.057 [2024-12-15 06:30:39.189978] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.316 [2024-12-15 06:30:39.202473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.316 [2024-12-15 06:30:39.202493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair ("Requested NSID 1 already in use" from subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext, followed by "Unable to add namespace" from nvmf_rpc.c:1520:nvmf_rpc_ns_paused) repeats for every subsequent attempt, with timestamps advancing from 06:30:39.213 through 06:30:41.385; repeated entries elided ...]
16936.00 IOPS, 132.31 MiB/s [2024-12-15T05:30:39.456Z]
16865.50 IOPS, 131.76 MiB/s [2024-12-15T05:30:40.494Z]
[2024-12-15 06:30:41.399362] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.392 [2024-12-15 06:30:41.399381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.392 [2024-12-15 06:30:41.414219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.392 [2024-12-15 06:30:41.414238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.392 [2024-12-15 06:30:41.429076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.392 [2024-12-15 06:30:41.429096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.392 16862.33 IOPS, 131.74 MiB/s [2024-12-15T05:30:41.532Z] [2024-12-15 06:30:41.443239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.392 [2024-12-15 06:30:41.443257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.392 [2024-12-15 06:30:41.458037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.392 [2024-12-15 06:30:41.458056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.392 [2024-12-15 06:30:41.473428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.392 [2024-12-15 06:30:41.473447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.392 [2024-12-15 06:30:41.486265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.392 [2024-12-15 06:30:41.486283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.392 [2024-12-15 06:30:41.501107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.392 [2024-12-15 06:30:41.501127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.392 [2024-12-15 06:30:41.514097] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.392 [2024-12-15 06:30:41.514115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.392 [2024-12-15 06:30:41.529208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.392 [2024-12-15 06:30:41.529226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.651 [2024-12-15 06:30:41.542240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.651 [2024-12-15 06:30:41.542259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.652 [2024-12-15 06:30:41.557165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.652 [2024-12-15 06:30:41.557185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.652 [2024-12-15 06:30:41.570948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.652 [2024-12-15 06:30:41.570967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.652 [2024-12-15 06:30:41.585363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.652 [2024-12-15 06:30:41.585383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.652 [2024-12-15 06:30:41.599456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.652 [2024-12-15 06:30:41.599475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.652 [2024-12-15 06:30:41.613564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.652 [2024-12-15 06:30:41.613583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.652 [2024-12-15 06:30:41.629160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:21.652 [2024-12-15 06:30:41.629180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.652 [2024-12-15 06:30:41.643408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.652 [2024-12-15 06:30:41.643427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.652 [2024-12-15 06:30:41.657882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.652 [2024-12-15 06:30:41.657901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.652 [2024-12-15 06:30:41.672911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.652 [2024-12-15 06:30:41.672930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.652 [2024-12-15 06:30:41.686446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.652 [2024-12-15 06:30:41.686464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.652 [2024-12-15 06:30:41.697522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.652 [2024-12-15 06:30:41.697540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.652 [2024-12-15 06:30:41.710800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.652 [2024-12-15 06:30:41.710819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.652 [2024-12-15 06:30:41.725407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.652 [2024-12-15 06:30:41.725426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.652 [2024-12-15 06:30:41.735662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.652 
[2024-12-15 06:30:41.735681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.652 [2024-12-15 06:30:41.750188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.652 [2024-12-15 06:30:41.750206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.652 [2024-12-15 06:30:41.765131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.652 [2024-12-15 06:30:41.765149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.652 [2024-12-15 06:30:41.777897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.652 [2024-12-15 06:30:41.777915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.912 [2024-12-15 06:30:41.793440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.912 [2024-12-15 06:30:41.793460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.912 [2024-12-15 06:30:41.804755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.912 [2024-12-15 06:30:41.804774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.912 [2024-12-15 06:30:41.819085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.912 [2024-12-15 06:30:41.819104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.912 [2024-12-15 06:30:41.833578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.912 [2024-12-15 06:30:41.833595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.912 [2024-12-15 06:30:41.849055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.912 [2024-12-15 06:30:41.849074] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.912 [2024-12-15 06:30:41.862805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.912 [2024-12-15 06:30:41.862824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.912 [2024-12-15 06:30:41.877400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.912 [2024-12-15 06:30:41.877420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.912 [2024-12-15 06:30:41.889862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.912 [2024-12-15 06:30:41.889881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.912 [2024-12-15 06:30:41.902387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.912 [2024-12-15 06:30:41.902406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.912 [2024-12-15 06:30:41.917272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.912 [2024-12-15 06:30:41.917292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.912 [2024-12-15 06:30:41.928389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.912 [2024-12-15 06:30:41.928409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.912 [2024-12-15 06:30:41.942967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.912 [2024-12-15 06:30:41.942985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.912 [2024-12-15 06:30:41.957867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.912 [2024-12-15 06:30:41.957886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:40:21.912 [2024-12-15 06:30:41.973187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.912 [2024-12-15 06:30:41.973206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.912 [2024-12-15 06:30:41.985840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.912 [2024-12-15 06:30:41.985858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.912 [2024-12-15 06:30:41.999110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.912 [2024-12-15 06:30:41.999129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.912 [2024-12-15 06:30:42.014111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.912 [2024-12-15 06:30:42.014134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.912 [2024-12-15 06:30:42.029336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.912 [2024-12-15 06:30:42.029356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.912 [2024-12-15 06:30:42.043167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.912 [2024-12-15 06:30:42.043185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.171 [2024-12-15 06:30:42.057922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.171 [2024-12-15 06:30:42.057942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.171 [2024-12-15 06:30:42.073014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.171 [2024-12-15 06:30:42.073033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.171 [2024-12-15 06:30:42.086863] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.171 [2024-12-15 06:30:42.086882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.171 [2024-12-15 06:30:42.101513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.171 [2024-12-15 06:30:42.101530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.171 [2024-12-15 06:30:42.116976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.171 [2024-12-15 06:30:42.117000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.171 [2024-12-15 06:30:42.131047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.171 [2024-12-15 06:30:42.131065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.171 [2024-12-15 06:30:42.145188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.171 [2024-12-15 06:30:42.145206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.171 [2024-12-15 06:30:42.156020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.171 [2024-12-15 06:30:42.156038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.171 [2024-12-15 06:30:42.170730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.171 [2024-12-15 06:30:42.170748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.171 [2024-12-15 06:30:42.185154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.171 [2024-12-15 06:30:42.185172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.171 [2024-12-15 06:30:42.198657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:22.171 [2024-12-15 06:30:42.198676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.171 [2024-12-15 06:30:42.213249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.171 [2024-12-15 06:30:42.213269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.171 [2024-12-15 06:30:42.224331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.171 [2024-12-15 06:30:42.224350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.171 [2024-12-15 06:30:42.238760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.171 [2024-12-15 06:30:42.238778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.171 [2024-12-15 06:30:42.253488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.171 [2024-12-15 06:30:42.253507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.171 [2024-12-15 06:30:42.264668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.172 [2024-12-15 06:30:42.264687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.172 [2024-12-15 06:30:42.279174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.172 [2024-12-15 06:30:42.279196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.172 [2024-12-15 06:30:42.293776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.172 [2024-12-15 06:30:42.293794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.172 [2024-12-15 06:30:42.308968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.172 
[2024-12-15 06:30:42.308987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.431 [2024-12-15 06:30:42.322062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.431 [2024-12-15 06:30:42.322081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.431 [2024-12-15 06:30:42.336804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.431 [2024-12-15 06:30:42.336823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.431 [2024-12-15 06:30:42.350965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.431 [2024-12-15 06:30:42.350983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.431 [2024-12-15 06:30:42.365786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.431 [2024-12-15 06:30:42.365805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.431 [2024-12-15 06:30:42.380723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.431 [2024-12-15 06:30:42.380742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.431 [2024-12-15 06:30:42.394197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.431 [2024-12-15 06:30:42.394217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.431 [2024-12-15 06:30:42.409207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.431 [2024-12-15 06:30:42.409226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.431 [2024-12-15 06:30:42.419951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.431 [2024-12-15 06:30:42.419970] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.431 [2024-12-15 06:30:42.434449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.431 [2024-12-15 06:30:42.434470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.431 16881.75 IOPS, 131.89 MiB/s [2024-12-15T05:30:42.571Z] [2024-12-15 06:30:42.449345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.431 [2024-12-15 06:30:42.449366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.431 [2024-12-15 06:30:42.460316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.431 [2024-12-15 06:30:42.460335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.431 [2024-12-15 06:30:42.475174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.431 [2024-12-15 06:30:42.475207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.431 [2024-12-15 06:30:42.489708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.431 [2024-12-15 06:30:42.489726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.431 [2024-12-15 06:30:42.504983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.431 [2024-12-15 06:30:42.505008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.431 [2024-12-15 06:30:42.517913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.431 [2024-12-15 06:30:42.517933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.431 [2024-12-15 06:30:42.533785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.431 [2024-12-15 06:30:42.533805] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.431 [2024-12-15 06:30:42.548609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.431 [2024-12-15 06:30:42.548633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.431 [2024-12-15 06:30:42.562145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.431 [2024-12-15 06:30:42.562164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.691 [2024-12-15 06:30:42.576733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.691 [2024-12-15 06:30:42.576754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.691 [2024-12-15 06:30:42.590367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.691 [2024-12-15 06:30:42.590386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.691 [2024-12-15 06:30:42.604849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.691 [2024-12-15 06:30:42.604869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.691 [2024-12-15 06:30:42.617242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.691 [2024-12-15 06:30:42.617262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.691 [2024-12-15 06:30:42.631061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.691 [2024-12-15 06:30:42.631080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.691 [2024-12-15 06:30:42.645653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.691 [2024-12-15 06:30:42.645672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:40:22.691 [2024-12-15 06:30:42.660921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.691 [2024-12-15 06:30:42.660940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.691 [2024-12-15 06:30:42.675031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.691 [2024-12-15 06:30:42.675050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.691 [2024-12-15 06:30:42.689263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.691 [2024-12-15 06:30:42.689281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.691 [2024-12-15 06:30:42.701903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.691 [2024-12-15 06:30:42.701921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.691 [2024-12-15 06:30:42.715374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.691 [2024-12-15 06:30:42.715394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.691 [2024-12-15 06:30:42.730508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.691 [2024-12-15 06:30:42.730527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.691 [2024-12-15 06:30:42.745013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.691 [2024-12-15 06:30:42.745033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.691 [2024-12-15 06:30:42.756174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.691 [2024-12-15 06:30:42.756194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.691 [2024-12-15 06:30:42.770914] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.691 [2024-12-15 06:30:42.770933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.691 [2024-12-15 06:30:42.785117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.691 [2024-12-15 06:30:42.785136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.691 [2024-12-15 06:30:42.798069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.691 [2024-12-15 06:30:42.798088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.691 [2024-12-15 06:30:42.812576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.691 [2024-12-15 06:30:42.812600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.691 [2024-12-15 06:30:42.826693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.691 [2024-12-15 06:30:42.826719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.950 [2024-12-15 06:30:42.841547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.950 [2024-12-15 06:30:42.841567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.950 [2024-12-15 06:30:42.856815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.950 [2024-12-15 06:30:42.856833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.950 [2024-12-15 06:30:42.869867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.950 [2024-12-15 06:30:42.869885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.950 [2024-12-15 06:30:42.882789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:22.950 [2024-12-15 06:30:42.882807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.950 [2024-12-15 06:30:42.897435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.950 [2024-12-15 06:30:42.897454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.950 [2024-12-15 06:30:42.907774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.950 [2024-12-15 06:30:42.907793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.950 [2024-12-15 06:30:42.922444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.950 [2024-12-15 06:30:42.922462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.950 [2024-12-15 06:30:42.932684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.950 [2024-12-15 06:30:42.932703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.950 [2024-12-15 06:30:42.947364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.950 [2024-12-15 06:30:42.947384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.950 [2024-12-15 06:30:42.961921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.950 [2024-12-15 06:30:42.961940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.950 [2024-12-15 06:30:42.977070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.950 [2024-12-15 06:30:42.977090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.950 [2024-12-15 06:30:42.991433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.950 
[2024-12-15 06:30:42.991452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.950 [2024-12-15 06:30:43.006057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.950 [2024-12-15 06:30:43.006076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.950 [2024-12-15 06:30:43.020950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.950 [2024-12-15 06:30:43.020969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.950 [2024-12-15 06:30:43.035057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.951 [2024-12-15 06:30:43.035075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.951 [2024-12-15 06:30:43.049349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.951 [2024-12-15 06:30:43.049368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.951 [2024-12-15 06:30:43.062845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.951 [2024-12-15 06:30:43.062864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.951 [2024-12-15 06:30:43.077438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.951 [2024-12-15 06:30:43.077456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.951 [2024-12-15 06:30:43.087927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.951 [2024-12-15 06:30:43.087946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.210 [2024-12-15 06:30:43.102188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.210 [2024-12-15 06:30:43.102208] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.210 [2024-12-15 06:30:43.116982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.210 [2024-12-15 06:30:43.117007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.210 [2024-12-15 06:30:43.130160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.210 [2024-12-15 06:30:43.130178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.210 [2024-12-15 06:30:43.144788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.210 [2024-12-15 06:30:43.144806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.210 [2024-12-15 06:30:43.159187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.210 [2024-12-15 06:30:43.159205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.210 [2024-12-15 06:30:43.173855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.210 [2024-12-15 06:30:43.173874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.210 [2024-12-15 06:30:43.188998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.210 [2024-12-15 06:30:43.189016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.210 [2024-12-15 06:30:43.203457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.210 [2024-12-15 06:30:43.203475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.210 [2024-12-15 06:30:43.217831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.210 [2024-12-15 06:30:43.217850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:40:23.210 [2024-12-15 06:30:43.233312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.210 [2024-12-15 06:30:43.233331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.210 [2024-12-15 06:30:43.246302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.210 [2024-12-15 06:30:43.246321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.210 [2024-12-15 06:30:43.261606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.210 [2024-12-15 06:30:43.261625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.210 [2024-12-15 06:30:43.276693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.210 [2024-12-15 06:30:43.276711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.210 [2024-12-15 06:30:43.290977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.210 [2024-12-15 06:30:43.291001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.210 [2024-12-15 06:30:43.305375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.210 [2024-12-15 06:30:43.305394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.210 [2024-12-15 06:30:43.316373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.210 [2024-12-15 06:30:43.316392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.210 [2024-12-15 06:30:43.331374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.210 [2024-12-15 06:30:43.331392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.210 [2024-12-15 06:30:43.345975] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.210 [2024-12-15 06:30:43.346002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.469 [2024-12-15 06:30:43.360723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.469 [2024-12-15 06:30:43.360743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.469 [2024-12-15 06:30:43.373980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.469 [2024-12-15 06:30:43.374005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.469 [2024-12-15 06:30:43.389323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.469 [2024-12-15 06:30:43.389341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.469 [2024-12-15 06:30:43.403443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.469 [2024-12-15 06:30:43.403461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.469 [2024-12-15 06:30:43.417577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.469 [2024-12-15 06:30:43.417594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.469 [2024-12-15 06:30:43.432742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.469 [2024-12-15 06:30:43.432766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.469 [2024-12-15 06:30:43.447179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.469 [2024-12-15 06:30:43.447198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.469 16905.80 IOPS, 132.08 MiB/s 00:40:23.469 Latency(us) 00:40:23.469 
[2024-12-15T05:30:43.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:23.469 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:40:23.469 Nvme1n1 : 5.01 16907.85 132.09 0.00 0.00 7563.50 1934.87 13419.28 00:40:23.469 [2024-12-15T05:30:43.609Z] =================================================================================================================== 00:40:23.469 [2024-12-15T05:30:43.609Z] Total : 16907.85 132.09 0.00 0.00 7563.50 1934.87 13419.28 00:40:23.469 [2024-12-15 06:30:43.457248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.469 [2024-12-15 06:30:43.457265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.469 [2024-12-15 06:30:43.469243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.469 [2024-12-15 06:30:43.469259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.469 [2024-12-15 06:30:43.481257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.469 [2024-12-15 06:30:43.481276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.469 [2024-12-15 06:30:43.493249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.469 [2024-12-15 06:30:43.493266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.469 [2024-12-15 06:30:43.505252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.469 [2024-12-15 06:30:43.505268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.469 [2024-12-15 06:30:43.517246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.469 [2024-12-15 06:30:43.517259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:40:23.469 [2024-12-15 06:30:43.529244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.469 [2024-12-15 06:30:43.529257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.469 [2024-12-15 06:30:43.541246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.469 [2024-12-15 06:30:43.541267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.469 [2024-12-15 06:30:43.553245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.469 [2024-12-15 06:30:43.553259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.469 [2024-12-15 06:30:43.565241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.469 [2024-12-15 06:30:43.565252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.469 [2024-12-15 06:30:43.577245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.469 [2024-12-15 06:30:43.577256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.469 [2024-12-15 06:30:43.589240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.469 [2024-12-15 06:30:43.589252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.469 [2024-12-15 06:30:43.601241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.469 [2024-12-15 06:30:43.601252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1264831) - No such process 00:40:23.729 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1264831 00:40:23.729 06:30:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:23.729 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:23.729 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:23.729 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:23.729 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:40:23.729 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:23.729 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:23.729 delay0 00:40:23.729 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:23.729 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:40:23.729 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:23.729 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:23.729 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:23.729 06:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:40:23.729 [2024-12-15 06:30:43.786065] nvme_fabric.c: 
295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:40:31.855 Initializing NVMe Controllers 00:40:31.855 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:31.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:40:31.855 Initialization complete. Launching workers. 00:40:31.855 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 276, failed: 15305 00:40:31.855 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 15505, failed to submit 76 00:40:31.855 success 15391, unsuccessful 114, failed 0 00:40:31.855 06:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:40:31.855 06:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:40:31.855 06:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:31.855 06:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:40:31.855 06:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:31.855 06:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:40:31.855 06:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:31.855 06:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:31.855 rmmod nvme_tcp 00:40:31.855 rmmod nvme_fabrics 00:40:31.855 rmmod nvme_keyring 00:40:31.855 06:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:31.855 06:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:40:31.855 06:30:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:40:31.855 06:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1263243 ']' 00:40:31.855 06:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1263243 00:40:31.855 06:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1263243 ']' 00:40:31.855 06:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1263243 00:40:31.855 06:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:40:31.855 06:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:31.855 06:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1263243 00:40:31.855 06:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:31.855 06:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:31.855 06:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1263243' 00:40:31.855 killing process with pid 1263243 00:40:31.855 06:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1263243 00:40:31.855 06:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1263243 00:40:31.855 06:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:31.855 06:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:31.855 06:30:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:31.855 06:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:40:31.855 06:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:40:31.855 06:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:31.855 06:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:40:31.855 06:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:31.855 06:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:31.855 06:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:31.855 06:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:31.855 06:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:33.232 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:33.232 00:40:33.232 real 0m32.288s 00:40:33.232 user 0m41.811s 00:40:33.232 sys 0m12.984s 00:40:33.232 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:33.232 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:33.232 ************************************ 00:40:33.232 END TEST nvmf_zcopy 00:40:33.232 ************************************ 00:40:33.232 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:40:33.232 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:33.232 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:33.232 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:33.232 ************************************ 00:40:33.232 START TEST nvmf_nmic 00:40:33.232 ************************************ 00:40:33.232 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:40:33.491 * Looking for test storage... 00:40:33.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:33.491 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:33.491 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:33.491 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:40:33.491 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:33.491 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:33.491 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:33.491 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:33.491 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:40:33.491 06:30:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:40:33.491 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:40:33.491 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:40:33.491 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:33.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:33.492 --rc genhtml_branch_coverage=1 00:40:33.492 --rc 
genhtml_function_coverage=1 00:40:33.492 --rc genhtml_legend=1 00:40:33.492 --rc geninfo_all_blocks=1 00:40:33.492 --rc geninfo_unexecuted_blocks=1 00:40:33.492 00:40:33.492 ' 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:33.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:33.492 --rc genhtml_branch_coverage=1 00:40:33.492 --rc genhtml_function_coverage=1 00:40:33.492 --rc genhtml_legend=1 00:40:33.492 --rc geninfo_all_blocks=1 00:40:33.492 --rc geninfo_unexecuted_blocks=1 00:40:33.492 00:40:33.492 ' 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:33.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:33.492 --rc genhtml_branch_coverage=1 00:40:33.492 --rc genhtml_function_coverage=1 00:40:33.492 --rc genhtml_legend=1 00:40:33.492 --rc geninfo_all_blocks=1 00:40:33.492 --rc geninfo_unexecuted_blocks=1 00:40:33.492 00:40:33.492 ' 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:33.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:33.492 --rc genhtml_branch_coverage=1 00:40:33.492 --rc genhtml_function_coverage=1 00:40:33.492 --rc genhtml_legend=1 00:40:33.492 --rc geninfo_all_blocks=1 00:40:33.492 --rc geninfo_unexecuted_blocks=1 00:40:33.492 00:40:33.492 ' 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:33.492 06:30:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:33.492 06:30:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:33.492 06:30:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:33.492 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:33.493 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:33.493 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:40:33.493 06:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@320 -- # local -ga e810 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:40.064 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:40.065 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:40.065 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:40.065 Found net devices under 0000:af:00.0: cvl_0_0 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:40.065 Found net devices under 0000:af:00.1: cvl_0_1 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:40.065 06:30:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:40.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:40.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:40:40.065 00:40:40.065 --- 10.0.0.2 ping statistics --- 00:40:40.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:40.065 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:40.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:40.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:40:40.065 00:40:40.065 --- 10.0.0.1 ping statistics --- 00:40:40.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:40.065 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:40:40.065 06:30:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1270289 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1270289 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1270289 ']' 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:40.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:40.065 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:40.065 [2024-12-15 06:30:59.553208] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:40:40.065 [2024-12-15 06:30:59.554178] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:40:40.065 [2024-12-15 06:30:59.554218] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:40.065 [2024-12-15 06:30:59.615584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:40.065 [2024-12-15 06:30:59.640517] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:40.065 [2024-12-15 06:30:59.640557] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:40.066 [2024-12-15 06:30:59.640564] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:40.066 [2024-12-15 06:30:59.640571] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:40.066 [2024-12-15 06:30:59.640576] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:40.066 [2024-12-15 06:30:59.641884] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:40.066 [2024-12-15 06:30:59.642005] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:40:40.066 [2024-12-15 06:30:59.642100] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:40.066 [2024-12-15 06:30:59.642101] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:40:40.066 [2024-12-15 06:30:59.706774] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:40.066 [2024-12-15 06:30:59.707747] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:40.066 [2024-12-15 06:30:59.707839] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:40.066 [2024-12-15 06:30:59.708213] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:40.066 [2024-12-15 06:30:59.708268] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:40.066 [2024-12-15 06:30:59.774850] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:40.066 Malloc0 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:40.066 [2024-12-15 
06:30:59.867167] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:40:40.066 test case1: single bdev can't be used in multiple subsystems 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.066 06:30:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:40.066 [2024-12-15 06:30:59.902551] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:40:40.066 [2024-12-15 06:30:59.902576] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:40:40.066 [2024-12-15 06:30:59.902584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:40.066 request: 00:40:40.066 { 00:40:40.066 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:40:40.066 "namespace": { 00:40:40.066 "bdev_name": "Malloc0", 00:40:40.066 "no_auto_visible": false, 00:40:40.066 "hide_metadata": false 00:40:40.066 }, 00:40:40.066 "method": "nvmf_subsystem_add_ns", 00:40:40.066 "req_id": 1 00:40:40.066 } 00:40:40.066 Got JSON-RPC error response 00:40:40.066 response: 00:40:40.066 { 00:40:40.066 "code": -32602, 00:40:40.066 "message": "Invalid parameters" 00:40:40.066 } 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:40:40.066 Adding namespace failed - expected result. 
00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:40:40.066 test case2: host connect to nvmf target in multiple paths 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:40.066 [2024-12-15 06:30:59.914651] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.066 06:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:40.066 06:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:40:40.325 06:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:40:40.325 06:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:40:40.325 06:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:40.325 06:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:40:40.325 06:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:40:42.229 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:42.229 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:42.229 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:42.229 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:40:42.229 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:42.229 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:40:42.229 06:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:42.229 [global] 00:40:42.229 thread=1 00:40:42.229 invalidate=1 00:40:42.229 rw=write 00:40:42.229 time_based=1 00:40:42.229 runtime=1 00:40:42.229 ioengine=libaio 00:40:42.229 direct=1 00:40:42.229 bs=4096 00:40:42.229 iodepth=1 00:40:42.229 norandommap=0 00:40:42.229 numjobs=1 00:40:42.229 00:40:42.229 verify_dump=1 00:40:42.229 verify_backlog=512 00:40:42.229 verify_state_save=0 00:40:42.229 do_verify=1 00:40:42.229 verify=crc32c-intel 00:40:42.229 [job0] 00:40:42.229 filename=/dev/nvme0n1 00:40:42.487 Could not set queue depth (nvme0n1) 00:40:42.487 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:42.487 fio-3.35 00:40:42.487 Starting 1 thread 00:40:43.865 00:40:43.865 job0: (groupid=0, jobs=1): err= 0: pid=1270890: Sun Dec 15 
06:31:03 2024 00:40:43.865 read: IOPS=22, BW=90.2KiB/s (92.4kB/s)(92.0KiB/1020msec) 00:40:43.865 slat (nsec): min=9435, max=23848, avg=22049.74, stdev=2854.47 00:40:43.865 clat (usec): min=40839, max=41118, avg=40960.99, stdev=72.82 00:40:43.865 lat (usec): min=40848, max=41140, avg=40983.04, stdev=73.59 00:40:43.865 clat percentiles (usec): 00:40:43.865 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:40:43.865 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:43.865 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:43.865 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:43.865 | 99.99th=[41157] 00:40:43.865 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:40:43.865 slat (nsec): min=8759, max=36475, avg=10458.33, stdev=1394.93 00:40:43.865 clat (usec): min=127, max=296, avg=137.25, stdev=11.40 00:40:43.865 lat (usec): min=136, max=332, avg=147.71, stdev=12.19 00:40:43.865 clat percentiles (usec): 00:40:43.865 | 1.00th=[ 129], 5.00th=[ 131], 10.00th=[ 133], 20.00th=[ 133], 00:40:43.865 | 30.00th=[ 135], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 137], 00:40:43.865 | 70.00th=[ 139], 80.00th=[ 141], 90.00th=[ 145], 95.00th=[ 147], 00:40:43.865 | 99.00th=[ 161], 99.50th=[ 215], 99.90th=[ 297], 99.95th=[ 297], 00:40:43.865 | 99.99th=[ 297] 00:40:43.865 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:40:43.865 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:43.865 lat (usec) : 250=95.33%, 500=0.37% 00:40:43.865 lat (msec) : 50=4.30% 00:40:43.865 cpu : usr=0.20%, sys=0.49%, ctx=538, majf=0, minf=1 00:40:43.865 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:43.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:43.865 issued rwts: 
total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:43.865 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:43.865 00:40:43.865 Run status group 0 (all jobs): 00:40:43.865 READ: bw=90.2KiB/s (92.4kB/s), 90.2KiB/s-90.2KiB/s (92.4kB/s-92.4kB/s), io=92.0KiB (94.2kB), run=1020-1020msec 00:40:43.865 WRITE: bw=2008KiB/s (2056kB/s), 2008KiB/s-2008KiB/s (2056kB/s-2056kB/s), io=2048KiB (2097kB), run=1020-1020msec 00:40:43.865 00:40:43.865 Disk stats (read/write): 00:40:43.865 nvme0n1: ios=49/512, merge=0/0, ticks=1805/69, in_queue=1874, util=98.50% 00:40:43.865 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:43.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:40:43.865 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:43.865 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:40:43.865 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:43.865 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:43.865 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:43.865 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:43.865 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:40:43.865 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:40:43.865 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:40:43.865 06:31:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:43.865 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:40:43.865 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:43.865 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:40:43.865 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:43.865 06:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:43.865 rmmod nvme_tcp 00:40:43.865 rmmod nvme_fabrics 00:40:43.865 rmmod nvme_keyring 00:40:44.125 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:44.125 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:40:44.125 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:40:44.125 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1270289 ']' 00:40:44.125 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1270289 00:40:44.125 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1270289 ']' 00:40:44.125 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1270289 00:40:44.125 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:40:44.125 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:44.125 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1270289 
00:40:44.125 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:44.125 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:44.125 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1270289' 00:40:44.125 killing process with pid 1270289 00:40:44.125 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1270289 00:40:44.125 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1270289 00:40:44.125 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:44.125 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:44.125 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:44.125 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:40:44.125 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:40:44.125 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:44.125 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:40:44.125 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:44.384 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:44.384 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:44.384 06:31:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:44.384 06:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:46.288 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:46.288 00:40:46.288 real 0m13.002s 00:40:46.288 user 0m23.404s 00:40:46.288 sys 0m5.855s 00:40:46.288 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:46.288 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:46.288 ************************************ 00:40:46.288 END TEST nvmf_nmic 00:40:46.288 ************************************ 00:40:46.288 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:46.288 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:46.288 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:46.288 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:46.288 ************************************ 00:40:46.288 START TEST nvmf_fio_target 00:40:46.288 ************************************ 00:40:46.288 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:46.548 * Looking for test storage... 
00:40:46.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:46.548 
06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:46.548 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:46.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:46.548 --rc genhtml_branch_coverage=1 00:40:46.548 --rc genhtml_function_coverage=1 00:40:46.548 --rc genhtml_legend=1 00:40:46.548 --rc geninfo_all_blocks=1 00:40:46.548 --rc geninfo_unexecuted_blocks=1 00:40:46.548 00:40:46.548 ' 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:46.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:46.549 --rc genhtml_branch_coverage=1 00:40:46.549 --rc genhtml_function_coverage=1 00:40:46.549 --rc genhtml_legend=1 00:40:46.549 --rc geninfo_all_blocks=1 00:40:46.549 --rc geninfo_unexecuted_blocks=1 00:40:46.549 00:40:46.549 ' 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:46.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:46.549 --rc genhtml_branch_coverage=1 00:40:46.549 --rc genhtml_function_coverage=1 00:40:46.549 --rc genhtml_legend=1 00:40:46.549 --rc geninfo_all_blocks=1 00:40:46.549 --rc geninfo_unexecuted_blocks=1 00:40:46.549 00:40:46.549 ' 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:46.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:46.549 --rc genhtml_branch_coverage=1 00:40:46.549 --rc genhtml_function_coverage=1 00:40:46.549 --rc genhtml_legend=1 00:40:46.549 --rc geninfo_all_blocks=1 
00:40:46.549 --rc geninfo_unexecuted_blocks=1 00:40:46.549 00:40:46.549 ' 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:46.549 
06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.549 06:31:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:46.549 
06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:46.549 06:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:40:46.549 06:31:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:40:53.120 06:31:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:53.120 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:53.120 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:53.120 
06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:53.120 Found net 
devices under 0000:af:00.0: cvl_0_0 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:53.120 Found net devices under 0000:af:00.1: cvl_0_1 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:53.120 06:31:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:53.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:53.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:40:53.120 00:40:53.120 --- 10.0.0.2 ping statistics --- 00:40:53.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:53.120 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:53.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:53.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:40:53.120 00:40:53.120 --- 10.0.0.1 ping statistics --- 00:40:53.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:53.120 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:53.120 06:31:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1274584 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1274584 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1274584 ']' 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:53.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:53.120 [2024-12-15 06:31:12.561306] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:53.120 [2024-12-15 06:31:12.562217] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:40:53.120 [2024-12-15 06:31:12.562249] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:53.120 [2024-12-15 06:31:12.641962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:53.120 [2024-12-15 06:31:12.664547] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:53.120 [2024-12-15 06:31:12.664584] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:53.120 [2024-12-15 06:31:12.664592] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:53.120 [2024-12-15 06:31:12.664598] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:53.120 [2024-12-15 06:31:12.664603] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:53.120 [2024-12-15 06:31:12.666015] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:53.120 [2024-12-15 06:31:12.666088] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:40:53.120 [2024-12-15 06:31:12.666184] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:53.120 [2024-12-15 06:31:12.666185] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:40:53.120 [2024-12-15 06:31:12.729144] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:53.120 [2024-12-15 06:31:12.729673] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:53.120 [2024-12-15 06:31:12.730142] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:40:53.120 [2024-12-15 06:31:12.730574] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:53.120 [2024-12-15 06:31:12.730613] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:53.120 06:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:53.120 [2024-12-15 06:31:12.974971] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:53.120 06:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:53.120 06:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:40:53.120 06:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:40:53.379 06:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:40:53.379 06:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:53.638 06:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:40:53.638 06:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:53.897 06:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:40:53.897 06:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:40:54.155 06:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:54.155 06:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:40:54.155 06:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:54.413 06:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:40:54.413 06:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:54.672 06:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:40:54.672 06:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:40:54.930 06:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:54.930 06:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:40:54.930 06:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:55.250 06:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:40:55.250 06:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:40:55.528 06:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:55.529 [2024-12-15 06:31:15.558868] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:55.529 06:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:40:55.819 06:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:40:56.077 06:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:56.336 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:40:56.336 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:40:56.336 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:56.336 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:40:56.336 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:40:56.336 06:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:40:58.240 06:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:58.240 06:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:58.240 06:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:58.240 06:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:40:58.240 06:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:58.240 06:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:40:58.240 06:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:58.240 [global] 00:40:58.240 thread=1 00:40:58.240 invalidate=1 00:40:58.240 rw=write 00:40:58.240 time_based=1 00:40:58.240 runtime=1 00:40:58.240 ioengine=libaio 00:40:58.240 direct=1 00:40:58.240 bs=4096 00:40:58.240 iodepth=1 00:40:58.240 norandommap=0 00:40:58.240 numjobs=1 00:40:58.240 00:40:58.240 verify_dump=1 00:40:58.240 verify_backlog=512 00:40:58.240 verify_state_save=0 00:40:58.240 do_verify=1 00:40:58.240 verify=crc32c-intel 00:40:58.240 [job0] 00:40:58.240 filename=/dev/nvme0n1 00:40:58.240 [job1] 00:40:58.240 filename=/dev/nvme0n2 00:40:58.240 [job2] 00:40:58.240 filename=/dev/nvme0n3 00:40:58.240 [job3] 00:40:58.240 filename=/dev/nvme0n4 00:40:58.240 Could not set queue depth (nvme0n1) 00:40:58.240 Could not set queue depth (nvme0n2) 00:40:58.240 Could not set queue depth (nvme0n3) 00:40:58.240 Could not set queue depth (nvme0n4) 00:40:58.498 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:58.498 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:58.498 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:58.498 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:58.498 fio-3.35 00:40:58.498 Starting 4 threads 00:40:59.874 00:40:59.874 job0: (groupid=0, jobs=1): err= 0: pid=1275677: Sun Dec 15 06:31:19 2024 00:40:59.874 read: IOPS=1968, BW=7872KiB/s (8061kB/s)(7880KiB/1001msec) 00:40:59.874 slat (nsec): min=6093, max=56397, avg=7525.74, stdev=1730.47 00:40:59.874 clat (usec): min=188, max=40877, avg=307.34, stdev=943.36 00:40:59.874 lat (usec): min=196, 
max=40887, avg=314.86, stdev=943.50 00:40:59.874 clat percentiles (usec): 00:40:59.874 | 1.00th=[ 208], 5.00th=[ 219], 10.00th=[ 231], 20.00th=[ 243], 00:40:59.874 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 265], 00:40:59.874 | 70.00th=[ 273], 80.00th=[ 293], 90.00th=[ 424], 95.00th=[ 474], 00:40:59.874 | 99.00th=[ 506], 99.50th=[ 510], 99.90th=[10028], 99.95th=[40633], 00:40:59.874 | 99.99th=[40633] 00:40:59.874 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:40:59.874 slat (nsec): min=5706, max=36651, avg=10894.12, stdev=1714.38 00:40:59.874 clat (usec): min=124, max=449, avg=168.33, stdev=28.68 00:40:59.874 lat (usec): min=135, max=459, avg=179.22, stdev=28.88 00:40:59.874 clat percentiles (usec): 00:40:59.874 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 143], 00:40:59.874 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 161], 60.00th=[ 169], 00:40:59.874 | 70.00th=[ 180], 80.00th=[ 192], 90.00th=[ 206], 95.00th=[ 221], 00:40:59.874 | 99.00th=[ 255], 99.50th=[ 265], 99.90th=[ 277], 99.95th=[ 322], 00:40:59.874 | 99.99th=[ 449] 00:40:59.874 bw ( KiB/s): min= 8175, max= 8175, per=36.68%, avg=8175.00, stdev= 0.00, samples=1 00:40:59.874 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:40:59.874 lat (usec) : 250=68.42%, 500=30.84%, 750=0.70% 00:40:59.874 lat (msec) : 20=0.02%, 50=0.02% 00:40:59.874 cpu : usr=1.90%, sys=4.00%, ctx=4020, majf=0, minf=1 00:40:59.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:59.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:59.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:59.874 issued rwts: total=1970,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:59.874 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:59.874 job1: (groupid=0, jobs=1): err= 0: pid=1275678: Sun Dec 15 06:31:19 2024 00:40:59.874 read: IOPS=2045, BW=8184KiB/s 
(8380kB/s)(8192KiB/1001msec) 00:40:59.875 slat (nsec): min=7038, max=27718, avg=8009.74, stdev=1004.83 00:40:59.875 clat (usec): min=182, max=509, avg=283.86, stdev=66.51 00:40:59.875 lat (usec): min=190, max=520, avg=291.87, stdev=66.57 00:40:59.875 clat percentiles (usec): 00:40:59.875 | 1.00th=[ 210], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 245], 00:40:59.875 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 269], 00:40:59.875 | 70.00th=[ 281], 80.00th=[ 297], 90.00th=[ 383], 95.00th=[ 469], 00:40:59.875 | 99.00th=[ 482], 99.50th=[ 490], 99.90th=[ 506], 99.95th=[ 506], 00:40:59.875 | 99.99th=[ 510] 00:40:59.875 write: IOPS=2108, BW=8436KiB/s (8638kB/s)(8444KiB/1001msec); 0 zone resets 00:40:59.875 slat (nsec): min=9084, max=35752, avg=11647.75, stdev=1627.21 00:40:59.875 clat (usec): min=127, max=1378, avg=173.23, stdev=39.15 00:40:59.875 lat (usec): min=138, max=1388, avg=184.88, stdev=39.27 00:40:59.875 clat percentiles (usec): 00:40:59.875 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 151], 00:40:59.875 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 174], 00:40:59.875 | 70.00th=[ 184], 80.00th=[ 194], 90.00th=[ 210], 95.00th=[ 229], 00:40:59.875 | 99.00th=[ 255], 99.50th=[ 269], 99.90th=[ 371], 99.95th=[ 416], 00:40:59.875 | 99.99th=[ 1385] 00:40:59.875 bw ( KiB/s): min= 8192, max= 8192, per=36.75%, avg=8192.00, stdev= 0.00, samples=1 00:40:59.875 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:40:59.875 lat (usec) : 250=65.79%, 500=34.02%, 750=0.17% 00:40:59.875 lat (msec) : 2=0.02% 00:40:59.875 cpu : usr=3.10%, sys=3.50%, ctx=4159, majf=0, minf=2 00:40:59.875 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:59.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:59.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:59.875 issued rwts: total=2048,2111,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:59.875 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:40:59.875 job2: (groupid=0, jobs=1): err= 0: pid=1275679: Sun Dec 15 06:31:19 2024 00:40:59.875 read: IOPS=558, BW=2235KiB/s (2288kB/s)(2284KiB/1022msec) 00:40:59.875 slat (nsec): min=7069, max=30795, avg=8880.49, stdev=2773.87 00:40:59.875 clat (usec): min=194, max=41457, avg=1415.92, stdev=6700.68 00:40:59.875 lat (usec): min=202, max=41466, avg=1424.80, stdev=6701.01 00:40:59.875 clat percentiles (usec): 00:40:59.875 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 219], 00:40:59.875 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 239], 60.00th=[ 277], 00:40:59.875 | 70.00th=[ 285], 80.00th=[ 302], 90.00th=[ 371], 95.00th=[ 453], 00:40:59.875 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:40:59.875 | 99.99th=[41681] 00:40:59.875 write: IOPS=1001, BW=4008KiB/s (4104kB/s)(4096KiB/1022msec); 0 zone resets 00:40:59.875 slat (usec): min=10, max=561, avg=15.08, stdev=17.61 00:40:59.875 clat (usec): min=142, max=1489, avg=178.73, stdev=45.07 00:40:59.875 lat (usec): min=153, max=1500, avg=193.80, stdev=48.55 00:40:59.875 clat percentiles (usec): 00:40:59.875 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 161], 20.00th=[ 165], 00:40:59.875 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 176], 00:40:59.875 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 196], 95.00th=[ 217], 00:40:59.875 | 99.00th=[ 247], 99.50th=[ 255], 99.90th=[ 363], 99.95th=[ 1483], 00:40:59.875 | 99.99th=[ 1483] 00:40:59.875 bw ( KiB/s): min= 8192, max= 8192, per=36.75%, avg=8192.00, stdev= 0.00, samples=1 00:40:59.875 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:40:59.875 lat (usec) : 250=83.13%, 500=15.61%, 750=0.06% 00:40:59.875 lat (msec) : 2=0.13%, 10=0.06%, 50=1.00% 00:40:59.875 cpu : usr=1.08%, sys=1.76%, ctx=1598, majf=0, minf=1 00:40:59.875 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:59.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:40:59.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:59.875 issued rwts: total=571,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:59.875 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:59.875 job3: (groupid=0, jobs=1): err= 0: pid=1275680: Sun Dec 15 06:31:19 2024 00:40:59.875 read: IOPS=21, BW=87.1KiB/s (89.2kB/s)(88.0KiB/1010msec) 00:40:59.875 slat (nsec): min=10362, max=26162, avg=20579.64, stdev=4762.73 00:40:59.875 clat (usec): min=40838, max=41317, avg=40985.32, stdev=92.68 00:40:59.875 lat (usec): min=40862, max=41328, avg=41005.90, stdev=90.42 00:40:59.875 clat percentiles (usec): 00:40:59.875 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:40:59.875 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:59.875 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:59.875 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:59.875 | 99.99th=[41157] 00:40:59.875 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:40:59.875 slat (nsec): min=10959, max=40855, avg=12825.34, stdev=2437.17 00:40:59.875 clat (usec): min=150, max=886, avg=183.88, stdev=42.27 00:40:59.875 lat (usec): min=164, max=898, avg=196.71, stdev=42.55 00:40:59.875 clat percentiles (usec): 00:40:59.875 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:40:59.875 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 182], 00:40:59.875 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 200], 95.00th=[ 212], 00:40:59.875 | 99.00th=[ 273], 99.50th=[ 306], 99.90th=[ 889], 99.95th=[ 889], 00:40:59.875 | 99.99th=[ 889] 00:40:59.875 bw ( KiB/s): min= 4096, max= 4096, per=18.38%, avg=4096.00, stdev= 0.00, samples=1 00:40:59.875 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:59.875 lat (usec) : 250=94.01%, 500=1.50%, 750=0.19%, 1000=0.19% 00:40:59.875 lat (msec) : 50=4.12% 
00:40:59.875 cpu : usr=0.59%, sys=0.79%, ctx=535, majf=0, minf=1 00:40:59.875 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:59.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:59.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:59.875 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:59.875 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:59.875 00:40:59.875 Run status group 0 (all jobs): 00:40:59.875 READ: bw=17.6MiB/s (18.5MB/s), 87.1KiB/s-8184KiB/s (89.2kB/s-8380kB/s), io=18.0MiB (18.9MB), run=1001-1022msec 00:40:59.875 WRITE: bw=21.8MiB/s (22.8MB/s), 2028KiB/s-8436KiB/s (2076kB/s-8638kB/s), io=22.2MiB (23.3MB), run=1001-1022msec 00:40:59.875 00:40:59.875 Disk stats (read/write): 00:40:59.875 nvme0n1: ios=1559/1977, merge=0/0, ticks=1331/323, in_queue=1654, util=85.57% 00:40:59.875 nvme0n2: ios=1678/2048, merge=0/0, ticks=488/347, in_queue=835, util=90.95% 00:40:59.875 nvme0n3: ios=605/1024, merge=0/0, ticks=846/178, in_queue=1024, util=95.11% 00:40:59.875 nvme0n4: ios=41/512, merge=0/0, ticks=1643/88, in_queue=1731, util=93.81% 00:40:59.875 06:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:40:59.875 [global] 00:40:59.875 thread=1 00:40:59.875 invalidate=1 00:40:59.875 rw=randwrite 00:40:59.875 time_based=1 00:40:59.875 runtime=1 00:40:59.875 ioengine=libaio 00:40:59.875 direct=1 00:40:59.875 bs=4096 00:40:59.875 iodepth=1 00:40:59.875 norandommap=0 00:40:59.875 numjobs=1 00:40:59.875 00:40:59.875 verify_dump=1 00:40:59.875 verify_backlog=512 00:40:59.875 verify_state_save=0 00:40:59.875 do_verify=1 00:40:59.875 verify=crc32c-intel 00:40:59.875 [job0] 00:40:59.875 filename=/dev/nvme0n1 00:40:59.875 [job1] 00:40:59.875 filename=/dev/nvme0n2 00:40:59.875 [job2] 00:40:59.875 
filename=/dev/nvme0n3 00:40:59.875 [job3] 00:40:59.875 filename=/dev/nvme0n4 00:40:59.875 Could not set queue depth (nvme0n1) 00:40:59.875 Could not set queue depth (nvme0n2) 00:40:59.875 Could not set queue depth (nvme0n3) 00:40:59.875 Could not set queue depth (nvme0n4) 00:41:00.134 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:00.134 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:00.134 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:00.134 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:00.134 fio-3.35 00:41:00.134 Starting 4 threads 00:41:01.510 00:41:01.510 job0: (groupid=0, jobs=1): err= 0: pid=1276046: Sun Dec 15 06:31:21 2024 00:41:01.510 read: IOPS=21, BW=87.0KiB/s (89.1kB/s)(88.0KiB/1011msec) 00:41:01.510 slat (nsec): min=9539, max=23870, avg=19886.00, stdev=5359.06 00:41:01.510 clat (usec): min=40855, max=42000, avg=41038.82, stdev=232.19 00:41:01.510 lat (usec): min=40877, max=42023, avg=41058.70, stdev=231.91 00:41:01.510 clat percentiles (usec): 00:41:01.510 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:41:01.510 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:01.510 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:01.510 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:41:01.510 | 99.99th=[42206] 00:41:01.510 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:41:01.510 slat (usec): min=9, max=2523, avg=15.79, stdev=111.05 00:41:01.510 clat (usec): min=126, max=1498, avg=188.67, stdev=62.19 00:41:01.510 lat (usec): min=137, max=2719, avg=204.46, stdev=127.60 00:41:01.510 clat percentiles (usec): 00:41:01.510 | 1.00th=[ 131], 5.00th=[ 147], 10.00th=[ 161], 
20.00th=[ 172], 00:41:01.510 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 190], 00:41:01.510 | 70.00th=[ 194], 80.00th=[ 202], 90.00th=[ 212], 95.00th=[ 225], 00:41:01.510 | 99.00th=[ 249], 99.50th=[ 260], 99.90th=[ 1500], 99.95th=[ 1500], 00:41:01.510 | 99.99th=[ 1500] 00:41:01.510 bw ( KiB/s): min= 4096, max= 4096, per=18.38%, avg=4096.00, stdev= 0.00, samples=1 00:41:01.510 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:01.510 lat (usec) : 250=94.94%, 500=0.75% 00:41:01.510 lat (msec) : 2=0.19%, 50=4.12% 00:41:01.510 cpu : usr=0.20%, sys=0.69%, ctx=537, majf=0, minf=1 00:41:01.510 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:01.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.510 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:01.510 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:01.510 job1: (groupid=0, jobs=1): err= 0: pid=1276047: Sun Dec 15 06:31:21 2024 00:41:01.510 read: IOPS=21, BW=87.9KiB/s (90.0kB/s)(88.0KiB/1001msec) 00:41:01.510 slat (nsec): min=9735, max=23856, avg=15368.68, stdev=3616.16 00:41:01.510 clat (usec): min=40611, max=41163, avg=40958.53, stdev=97.71 00:41:01.510 lat (usec): min=40621, max=41177, avg=40973.90, stdev=98.47 00:41:01.510 clat percentiles (usec): 00:41:01.510 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:41:01.510 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:01.510 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:01.510 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:01.510 | 99.99th=[41157] 00:41:01.510 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:41:01.510 slat (nsec): min=9523, max=52985, avg=11146.79, stdev=2570.44 00:41:01.510 clat (usec): min=143, 
max=255, avg=179.48, stdev=13.10 00:41:01.510 lat (usec): min=166, max=286, avg=190.63, stdev=13.50 00:41:01.510 clat percentiles (usec): 00:41:01.510 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:41:01.510 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 182], 00:41:01.510 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 196], 95.00th=[ 202], 00:41:01.510 | 99.00th=[ 221], 99.50th=[ 235], 99.90th=[ 255], 99.95th=[ 255], 00:41:01.510 | 99.99th=[ 255] 00:41:01.510 bw ( KiB/s): min= 4096, max= 4096, per=18.38%, avg=4096.00, stdev= 0.00, samples=1 00:41:01.510 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:01.510 lat (usec) : 250=95.69%, 500=0.19% 00:41:01.510 lat (msec) : 50=4.12% 00:41:01.510 cpu : usr=0.50%, sys=0.80%, ctx=534, majf=0, minf=2 00:41:01.510 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:01.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.510 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:01.510 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:01.510 job2: (groupid=0, jobs=1): err= 0: pid=1276048: Sun Dec 15 06:31:21 2024 00:41:01.510 read: IOPS=1917, BW=7668KiB/s (7852kB/s)(7676KiB/1001msec) 00:41:01.510 slat (nsec): min=6716, max=29432, avg=7588.35, stdev=1107.33 00:41:01.510 clat (usec): min=197, max=41937, avg=323.64, stdev=1868.73 00:41:01.510 lat (usec): min=204, max=41947, avg=331.23, stdev=1869.31 00:41:01.510 clat percentiles (usec): 00:41:01.510 | 1.00th=[ 210], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 221], 00:41:01.510 | 30.00th=[ 227], 40.00th=[ 235], 50.00th=[ 243], 60.00th=[ 245], 00:41:01.510 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 255], 00:41:01.510 | 99.00th=[ 285], 99.50th=[ 433], 99.90th=[41157], 99.95th=[41681], 00:41:01.510 | 99.99th=[41681] 00:41:01.510 write: IOPS=2045, 
BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:41:01.510 slat (nsec): min=9543, max=43021, avg=10596.74, stdev=1303.20 00:41:01.510 clat (usec): min=132, max=403, avg=162.54, stdev=15.31 00:41:01.510 lat (usec): min=142, max=413, avg=173.14, stdev=15.53 00:41:01.510 clat percentiles (usec): 00:41:01.510 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:41:01.510 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163], 00:41:01.510 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 180], 95.00th=[ 186], 00:41:01.510 | 99.00th=[ 200], 99.50th=[ 215], 99.90th=[ 355], 99.95th=[ 400], 00:41:01.510 | 99.99th=[ 404] 00:41:01.510 bw ( KiB/s): min= 8192, max= 8192, per=36.76%, avg=8192.00, stdev= 0.00, samples=1 00:41:01.510 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:41:01.510 lat (usec) : 250=91.00%, 500=8.90% 00:41:01.510 lat (msec) : 50=0.10% 00:41:01.510 cpu : usr=2.60%, sys=3.10%, ctx=3969, majf=0, minf=1 00:41:01.510 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:01.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.510 issued rwts: total=1919,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:01.511 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:01.511 job3: (groupid=0, jobs=1): err= 0: pid=1276050: Sun Dec 15 06:31:21 2024 00:41:01.511 read: IOPS=2251, BW=9007KiB/s (9223kB/s)(9016KiB/1001msec) 00:41:01.511 slat (nsec): min=7194, max=37483, avg=8338.59, stdev=1268.38 00:41:01.511 clat (usec): min=185, max=671, avg=229.79, stdev=24.52 00:41:01.511 lat (usec): min=192, max=680, avg=238.13, stdev=24.63 00:41:01.511 clat percentiles (usec): 00:41:01.511 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 208], 20.00th=[ 219], 00:41:01.511 | 30.00th=[ 223], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 233], 00:41:01.511 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 
247], 95.00th=[ 251], 00:41:01.511 | 99.00th=[ 277], 99.50th=[ 334], 99.90th=[ 562], 99.95th=[ 578], 00:41:01.511 | 99.99th=[ 676] 00:41:01.511 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:41:01.511 slat (nsec): min=10371, max=37462, avg=11822.99, stdev=1836.98 00:41:01.511 clat (usec): min=127, max=453, avg=163.08, stdev=18.21 00:41:01.511 lat (usec): min=139, max=475, avg=174.91, stdev=18.90 00:41:01.511 clat percentiles (usec): 00:41:01.511 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 151], 00:41:01.511 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 163], 00:41:01.511 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 186], 95.00th=[ 196], 00:41:01.511 | 99.00th=[ 217], 99.50th=[ 239], 99.90th=[ 281], 99.95th=[ 318], 00:41:01.511 | 99.99th=[ 453] 00:41:01.511 bw ( KiB/s): min=11224, max=11224, per=50.37%, avg=11224.00, stdev= 0.00, samples=1 00:41:01.511 iops : min= 2806, max= 2806, avg=2806.00, stdev= 0.00, samples=1 00:41:01.511 lat (usec) : 250=96.68%, 500=3.24%, 750=0.08% 00:41:01.511 cpu : usr=4.10%, sys=7.70%, ctx=4815, majf=0, minf=1 00:41:01.511 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:01.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.511 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.511 issued rwts: total=2254,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:01.511 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:01.511 00:41:01.511 Run status group 0 (all jobs): 00:41:01.511 READ: bw=16.3MiB/s (17.1MB/s), 87.0KiB/s-9007KiB/s (89.1kB/s-9223kB/s), io=16.5MiB (17.3MB), run=1001-1011msec 00:41:01.511 WRITE: bw=21.8MiB/s (22.8MB/s), 2026KiB/s-9.99MiB/s (2074kB/s-10.5MB/s), io=22.0MiB (23.1MB), run=1001-1011msec 00:41:01.511 00:41:01.511 Disk stats (read/write): 00:41:01.511 nvme0n1: ios=61/512, merge=0/0, ticks=1042/94, in_queue=1136, util=90.58% 00:41:01.511 nvme0n2: ios=68/512, 
merge=0/0, ticks=799/91, in_queue=890, util=91.17% 00:41:01.511 nvme0n3: ios=1581/1734, merge=0/0, ticks=1478/279, in_queue=1757, util=97.30% 00:41:01.511 nvme0n4: ios=2093/2048, merge=0/0, ticks=999/318, in_queue=1317, util=98.22% 00:41:01.511 06:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:41:01.511 [global] 00:41:01.511 thread=1 00:41:01.511 invalidate=1 00:41:01.511 rw=write 00:41:01.511 time_based=1 00:41:01.511 runtime=1 00:41:01.511 ioengine=libaio 00:41:01.511 direct=1 00:41:01.511 bs=4096 00:41:01.511 iodepth=128 00:41:01.511 norandommap=0 00:41:01.511 numjobs=1 00:41:01.511 00:41:01.511 verify_dump=1 00:41:01.511 verify_backlog=512 00:41:01.511 verify_state_save=0 00:41:01.511 do_verify=1 00:41:01.511 verify=crc32c-intel 00:41:01.511 [job0] 00:41:01.511 filename=/dev/nvme0n1 00:41:01.511 [job1] 00:41:01.511 filename=/dev/nvme0n2 00:41:01.511 [job2] 00:41:01.511 filename=/dev/nvme0n3 00:41:01.511 [job3] 00:41:01.511 filename=/dev/nvme0n4 00:41:01.511 Could not set queue depth (nvme0n1) 00:41:01.511 Could not set queue depth (nvme0n2) 00:41:01.511 Could not set queue depth (nvme0n3) 00:41:01.511 Could not set queue depth (nvme0n4) 00:41:01.770 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:01.770 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:01.770 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:01.770 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:01.770 fio-3.35 00:41:01.770 Starting 4 threads 00:41:03.169 00:41:03.169 job0: (groupid=0, jobs=1): err= 0: pid=1276421: Sun Dec 15 06:31:23 2024 00:41:03.169 read: IOPS=6603, BW=25.8MiB/s 
(27.0MB/s)(26.0MiB/1008msec) 00:41:03.169 slat (nsec): min=1348, max=8596.4k, avg=73559.79, stdev=596544.52 00:41:03.169 clat (usec): min=4517, max=21010, avg=9578.28, stdev=2657.77 00:41:03.169 lat (usec): min=4525, max=21578, avg=9651.84, stdev=2712.82 00:41:03.169 clat percentiles (usec): 00:41:03.169 | 1.00th=[ 6063], 5.00th=[ 6587], 10.00th=[ 6915], 20.00th=[ 7308], 00:41:03.169 | 30.00th=[ 8029], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9372], 00:41:03.169 | 70.00th=[ 9896], 80.00th=[11207], 90.00th=[13566], 95.00th=[15664], 00:41:03.169 | 99.00th=[17171], 99.50th=[17957], 99.90th=[20841], 99.95th=[21103], 00:41:03.169 | 99.99th=[21103] 00:41:03.169 write: IOPS=7021, BW=27.4MiB/s (28.8MB/s)(27.6MiB/1008msec); 0 zone resets 00:41:03.169 slat (usec): min=2, max=8904, avg=65.79, stdev=469.46 00:41:03.169 clat (usec): min=1501, max=26206, avg=9031.59, stdev=3059.88 00:41:03.169 lat (usec): min=1516, max=26220, avg=9097.37, stdev=3087.67 00:41:03.169 clat percentiles (usec): 00:41:03.169 | 1.00th=[ 4490], 5.00th=[ 5473], 10.00th=[ 5932], 20.00th=[ 6849], 00:41:03.169 | 30.00th=[ 7439], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[ 9372], 00:41:03.169 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[11600], 95.00th=[13042], 00:41:03.169 | 99.00th=[25822], 99.50th=[25822], 99.90th=[26084], 99.95th=[26084], 00:41:03.169 | 99.99th=[26084] 00:41:03.169 bw ( KiB/s): min=25496, max=30112, per=37.88%, avg=27804.00, stdev=3264.00, samples=2 00:41:03.169 iops : min= 6374, max= 7528, avg=6951.00, stdev=816.00, samples=2 00:41:03.169 lat (msec) : 2=0.01%, 4=0.34%, 10=74.41%, 20=24.18%, 50=1.05% 00:41:03.169 cpu : usr=5.06%, sys=9.33%, ctx=474, majf=0, minf=2 00:41:03.169 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:41:03.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:03.169 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:03.169 issued rwts: total=6656,7078,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:41:03.169 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:03.169 job1: (groupid=0, jobs=1): err= 0: pid=1276442: Sun Dec 15 06:31:23 2024 00:41:03.169 read: IOPS=2527, BW=9.87MiB/s (10.4MB/s)(10.0MiB/1013msec) 00:41:03.169 slat (nsec): min=1597, max=7089.4k, avg=109294.36, stdev=608378.06 00:41:03.169 clat (usec): min=8464, max=26793, avg=13883.47, stdev=2523.10 00:41:03.169 lat (usec): min=8475, max=26869, avg=13992.76, stdev=2583.85 00:41:03.169 clat percentiles (usec): 00:41:03.169 | 1.00th=[ 9503], 5.00th=[11600], 10.00th=[11863], 20.00th=[12125], 00:41:03.169 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12911], 60.00th=[13566], 00:41:03.169 | 70.00th=[14353], 80.00th=[15795], 90.00th=[17695], 95.00th=[19268], 00:41:03.169 | 99.00th=[20579], 99.50th=[22676], 99.90th=[23462], 99.95th=[26608], 00:41:03.169 | 99.99th=[26870] 00:41:03.169 write: IOPS=2802, BW=10.9MiB/s (11.5MB/s)(11.1MiB/1013msec); 0 zone resets 00:41:03.169 slat (usec): min=2, max=21852, avg=248.05, stdev=1289.12 00:41:03.169 clat (msec): min=5, max=106, avg=32.15, stdev=22.62 00:41:03.169 lat (msec): min=7, max=106, avg=32.40, stdev=22.75 00:41:03.169 clat percentiles (msec): 00:41:03.169 | 1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 13], 20.00th=[ 14], 00:41:03.169 | 30.00th=[ 19], 40.00th=[ 20], 50.00th=[ 22], 60.00th=[ 29], 00:41:03.169 | 70.00th=[ 40], 80.00th=[ 49], 90.00th=[ 62], 95.00th=[ 87], 00:41:03.169 | 99.00th=[ 105], 99.50th=[ 106], 99.90th=[ 107], 99.95th=[ 107], 00:41:03.169 | 99.99th=[ 107] 00:41:03.169 bw ( KiB/s): min= 8512, max=13176, per=14.77%, avg=10844.00, stdev=3297.95, samples=2 00:41:03.169 iops : min= 2128, max= 3294, avg=2711.00, stdev=824.49, samples=2 00:41:03.169 lat (msec) : 10=1.02%, 20=66.57%, 50=22.52%, 100=9.22%, 250=0.67% 00:41:03.169 cpu : usr=2.17%, sys=4.84%, ctx=270, majf=0, minf=1 00:41:03.169 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:41:03.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:41:03.169 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:03.169 issued rwts: total=2560,2839,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:03.169 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:03.169 job2: (groupid=0, jobs=1): err= 0: pid=1276470: Sun Dec 15 06:31:23 2024 00:41:03.169 read: IOPS=3629, BW=14.2MiB/s (14.9MB/s)(14.3MiB/1010msec) 00:41:03.169 slat (nsec): min=1367, max=11174k, avg=102399.10, stdev=791536.49 00:41:03.169 clat (usec): min=1141, max=34478, avg=12837.53, stdev=4713.89 00:41:03.169 lat (usec): min=1166, max=34489, avg=12939.93, stdev=4785.48 00:41:03.169 clat percentiles (usec): 00:41:03.169 | 1.00th=[ 3490], 5.00th=[ 5145], 10.00th=[ 7963], 20.00th=[10028], 00:41:03.169 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12125], 60.00th=[12518], 00:41:03.169 | 70.00th=[13698], 80.00th=[14615], 90.00th=[18744], 95.00th=[22938], 00:41:03.169 | 99.00th=[29492], 99.50th=[30802], 99.90th=[34341], 99.95th=[34341], 00:41:03.169 | 99.99th=[34341] 00:41:03.169 write: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec); 0 zone resets 00:41:03.169 slat (usec): min=2, max=9318, avg=131.68, stdev=688.53 00:41:03.169 clat (usec): min=274, max=60665, avg=19756.32, stdev=14350.88 00:41:03.169 lat (usec): min=536, max=60673, avg=19887.99, stdev=14450.98 00:41:03.169 clat percentiles (usec): 00:41:03.169 | 1.00th=[ 914], 5.00th=[ 5342], 10.00th=[ 7046], 20.00th=[ 9241], 00:41:03.169 | 30.00th=[ 9634], 40.00th=[10421], 50.00th=[11863], 60.00th=[16188], 00:41:03.169 | 70.00th=[26608], 80.00th=[35914], 90.00th=[43779], 95.00th=[47973], 00:41:03.169 | 99.00th=[51119], 99.50th=[53740], 99.90th=[57934], 99.95th=[57934], 00:41:03.169 | 99.99th=[60556] 00:41:03.169 bw ( KiB/s): min=16016, max=16384, per=22.07%, avg=16200.00, stdev=260.22, samples=2 00:41:03.169 iops : min= 4004, max= 4096, avg=4050.00, stdev=65.05, samples=2 00:41:03.169 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.46% 00:41:03.169 
lat (msec) : 2=0.71%, 4=2.00%, 10=23.82%, 20=50.08%, 50=21.94% 00:41:03.169 lat (msec) : 100=0.91% 00:41:03.169 cpu : usr=2.87%, sys=5.65%, ctx=358, majf=0, minf=1 00:41:03.169 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:41:03.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:03.169 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:03.169 issued rwts: total=3666,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:03.169 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:03.169 job3: (groupid=0, jobs=1): err= 0: pid=1276480: Sun Dec 15 06:31:23 2024 00:41:03.169 read: IOPS=4043, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1013msec) 00:41:03.169 slat (nsec): min=1023, max=10313k, avg=97994.62, stdev=647133.82 00:41:03.169 clat (usec): min=1283, max=35624, avg=13007.20, stdev=4977.48 00:41:03.169 lat (usec): min=1287, max=35632, avg=13105.20, stdev=5018.26 00:41:03.169 clat percentiles (usec): 00:41:03.169 | 1.00th=[ 1385], 5.00th=[ 7439], 10.00th=[ 8848], 20.00th=[10552], 00:41:03.169 | 30.00th=[11338], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:41:03.170 | 70.00th=[13304], 80.00th=[15008], 90.00th=[17433], 95.00th=[20841], 00:41:03.170 | 99.00th=[33817], 99.50th=[33817], 99.90th=[35390], 99.95th=[35390], 00:41:03.170 | 99.99th=[35390] 00:41:03.170 write: IOPS=4516, BW=17.6MiB/s (18.5MB/s)(17.9MiB/1013msec); 0 zone resets 00:41:03.170 slat (nsec): min=1794, max=12518k, avg=121167.56, stdev=689971.19 00:41:03.170 clat (usec): min=1282, max=97936, avg=16459.42, stdev=13446.26 00:41:03.170 lat (usec): min=1293, max=97940, avg=16580.59, stdev=13521.10 00:41:03.170 clat percentiles (usec): 00:41:03.170 | 1.00th=[ 3097], 5.00th=[ 6390], 10.00th=[ 8979], 20.00th=[10290], 00:41:03.170 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[11994], 00:41:03.170 | 70.00th=[12649], 80.00th=[20317], 90.00th=[30540], 95.00th=[42730], 00:41:03.170 | 99.00th=[84411], 
99.50th=[92799], 99.90th=[98042], 99.95th=[98042], 00:41:03.170 | 99.99th=[98042] 00:41:03.170 bw ( KiB/s): min=16304, max=19272, per=24.24%, avg=17788.00, stdev=2098.69, samples=2 00:41:03.170 iops : min= 4076, max= 4818, avg=4447.00, stdev=524.67, samples=2 00:41:03.170 lat (msec) : 2=1.19%, 4=0.75%, 10=14.10%, 20=70.65%, 50=11.66% 00:41:03.170 lat (msec) : 100=1.65% 00:41:03.170 cpu : usr=1.98%, sys=4.35%, ctx=447, majf=0, minf=2 00:41:03.170 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:41:03.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:03.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:03.170 issued rwts: total=4096,4575,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:03.170 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:03.170 00:41:03.170 Run status group 0 (all jobs): 00:41:03.170 READ: bw=65.5MiB/s (68.6MB/s), 9.87MiB/s-25.8MiB/s (10.4MB/s-27.0MB/s), io=66.3MiB (69.5MB), run=1008-1013msec 00:41:03.170 WRITE: bw=71.7MiB/s (75.2MB/s), 10.9MiB/s-27.4MiB/s (11.5MB/s-28.8MB/s), io=72.6MiB (76.1MB), run=1008-1013msec 00:41:03.170 00:41:03.170 Disk stats (read/write): 00:41:03.170 nvme0n1: ios=5128/5127, merge=0/0, ticks=50208/47972, in_queue=98180, util=96.49% 00:41:03.170 nvme0n2: ios=2086/2407, merge=0/0, ticks=12462/30742, in_queue=43204, util=98.97% 00:41:03.170 nvme0n3: ios=3130/3191, merge=0/0, ticks=38226/59455, in_queue=97681, util=97.17% 00:41:03.170 nvme0n4: ios=3843/4096, merge=0/0, ticks=25592/25411, in_queue=51003, util=89.05% 00:41:03.170 06:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:41:03.170 [global] 00:41:03.170 thread=1 00:41:03.170 invalidate=1 00:41:03.170 rw=randwrite 00:41:03.170 time_based=1 00:41:03.170 runtime=1 00:41:03.170 ioengine=libaio 00:41:03.170 direct=1 
00:41:03.170 bs=4096 00:41:03.170 iodepth=128 00:41:03.170 norandommap=0 00:41:03.170 numjobs=1 00:41:03.170 00:41:03.170 verify_dump=1 00:41:03.170 verify_backlog=512 00:41:03.170 verify_state_save=0 00:41:03.170 do_verify=1 00:41:03.170 verify=crc32c-intel 00:41:03.170 [job0] 00:41:03.170 filename=/dev/nvme0n1 00:41:03.170 [job1] 00:41:03.170 filename=/dev/nvme0n2 00:41:03.170 [job2] 00:41:03.170 filename=/dev/nvme0n3 00:41:03.170 [job3] 00:41:03.170 filename=/dev/nvme0n4 00:41:03.170 Could not set queue depth (nvme0n1) 00:41:03.170 Could not set queue depth (nvme0n2) 00:41:03.170 Could not set queue depth (nvme0n3) 00:41:03.170 Could not set queue depth (nvme0n4) 00:41:03.440 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:03.440 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:03.440 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:03.440 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:03.440 fio-3.35 00:41:03.440 Starting 4 threads 00:41:04.813 00:41:04.813 job0: (groupid=0, jobs=1): err= 0: pid=1276877: Sun Dec 15 06:31:24 2024 00:41:04.813 read: IOPS=3370, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1004msec) 00:41:04.813 slat (nsec): min=995, max=12231k, avg=123121.47, stdev=778478.87 00:41:04.813 clat (usec): min=1268, max=61120, avg=16166.29, stdev=8234.93 00:41:04.813 lat (usec): min=3666, max=61127, avg=16289.41, stdev=8258.20 00:41:04.813 clat percentiles (usec): 00:41:04.813 | 1.00th=[ 4293], 5.00th=[ 6063], 10.00th=[ 6980], 20.00th=[ 8356], 00:41:04.813 | 30.00th=[10552], 40.00th=[13435], 50.00th=[15139], 60.00th=[16057], 00:41:04.813 | 70.00th=[19268], 80.00th=[22676], 90.00th=[26608], 95.00th=[29754], 00:41:04.813 | 99.00th=[43779], 99.50th=[44827], 99.90th=[61080], 99.95th=[61080], 00:41:04.813 
| 99.99th=[61080] 00:41:04.813 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:41:04.813 slat (nsec): min=1670, max=16224k, avg=146881.74, stdev=814093.99 00:41:04.813 clat (usec): min=1038, max=75504, avg=20261.43, stdev=16554.97 00:41:04.813 lat (usec): min=1047, max=75514, avg=20408.31, stdev=16643.52 00:41:04.813 clat percentiles (usec): 00:41:04.813 | 1.00th=[ 2343], 5.00th=[ 4178], 10.00th=[ 4752], 20.00th=[ 7570], 00:41:04.813 | 30.00th=[ 8225], 40.00th=[ 9896], 50.00th=[14353], 60.00th=[22414], 00:41:04.813 | 70.00th=[25035], 80.00th=[30802], 90.00th=[42730], 95.00th=[55837], 00:41:04.813 | 99.00th=[71828], 99.50th=[73925], 99.90th=[74974], 99.95th=[76022], 00:41:04.813 | 99.99th=[76022] 00:41:04.813 bw ( KiB/s): min= 8456, max=20216, per=23.53%, avg=14336.00, stdev=8315.58, samples=2 00:41:04.813 iops : min= 2114, max= 5054, avg=3584.00, stdev=2078.89, samples=2 00:41:04.813 lat (msec) : 2=0.37%, 4=1.05%, 10=32.75%, 20=31.17%, 50=30.73% 00:41:04.813 lat (msec) : 100=3.93% 00:41:04.813 cpu : usr=2.49%, sys=3.29%, ctx=422, majf=0, minf=2 00:41:04.813 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:41:04.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:04.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:04.813 issued rwts: total=3384,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:04.813 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:04.813 job1: (groupid=0, jobs=1): err= 0: pid=1276890: Sun Dec 15 06:31:24 2024 00:41:04.813 read: IOPS=2673, BW=10.4MiB/s (11.0MB/s)(10.5MiB/1003msec) 00:41:04.813 slat (nsec): min=1479, max=17412k, avg=126118.58, stdev=998575.01 00:41:04.813 clat (usec): min=2639, max=78363, avg=17087.80, stdev=9608.91 00:41:04.813 lat (usec): min=2644, max=80847, avg=17213.92, stdev=9695.98 00:41:04.813 clat percentiles (usec): 00:41:04.813 | 1.00th=[ 7898], 5.00th=[ 9110], 10.00th=[ 9372], 
20.00th=[10290], 00:41:04.813 | 30.00th=[11076], 40.00th=[11994], 50.00th=[13566], 60.00th=[15270], 00:41:04.813 | 70.00th=[19268], 80.00th=[24773], 90.00th=[27395], 95.00th=[31851], 00:41:04.813 | 99.00th=[58459], 99.50th=[69731], 99.90th=[72877], 99.95th=[72877], 00:41:04.813 | 99.99th=[78119] 00:41:04.813 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:41:04.813 slat (usec): min=2, max=14712, avg=201.54, stdev=942.78 00:41:04.813 clat (usec): min=1347, max=76826, avg=26430.38, stdev=16827.45 00:41:04.813 lat (usec): min=1355, max=76833, avg=26631.93, stdev=16928.50 00:41:04.813 clat percentiles (usec): 00:41:04.813 | 1.00th=[ 4113], 5.00th=[ 7570], 10.00th=[ 9241], 20.00th=[10421], 00:41:04.813 | 30.00th=[13304], 40.00th=[19268], 50.00th=[24249], 60.00th=[27395], 00:41:04.813 | 70.00th=[33162], 80.00th=[40109], 90.00th=[50070], 95.00th=[62653], 00:41:04.813 | 99.00th=[73925], 99.50th=[77071], 99.90th=[77071], 99.95th=[77071], 00:41:04.813 | 99.99th=[77071] 00:41:04.813 bw ( KiB/s): min=11616, max=12920, per=20.13%, avg=12268.00, stdev=922.07, samples=2 00:41:04.813 iops : min= 2904, max= 3230, avg=3067.00, stdev=230.52, samples=2 00:41:04.813 lat (msec) : 2=0.21%, 4=0.21%, 10=15.87%, 20=40.01%, 50=37.54% 00:41:04.813 lat (msec) : 100=6.17% 00:41:04.813 cpu : usr=2.59%, sys=3.99%, ctx=306, majf=0, minf=1 00:41:04.813 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:41:04.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:04.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:04.813 issued rwts: total=2682,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:04.813 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:04.813 job2: (groupid=0, jobs=1): err= 0: pid=1276899: Sun Dec 15 06:31:24 2024 00:41:04.813 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:41:04.813 slat (nsec): min=1153, max=12870k, avg=95997.43, 
stdev=729942.11 00:41:04.813 clat (usec): min=1370, max=57472, avg=13642.48, stdev=8254.37 00:41:04.813 lat (usec): min=1375, max=60043, avg=13738.48, stdev=8320.20 00:41:04.813 clat percentiles (usec): 00:41:04.813 | 1.00th=[ 3687], 5.00th=[ 5604], 10.00th=[ 6325], 20.00th=[ 8094], 00:41:04.813 | 30.00th=[ 9372], 40.00th=[10814], 50.00th=[11207], 60.00th=[12518], 00:41:04.813 | 70.00th=[14091], 80.00th=[15926], 90.00th=[26608], 95.00th=[31327], 00:41:04.813 | 99.00th=[43254], 99.50th=[45351], 99.90th=[56361], 99.95th=[57410], 00:41:04.813 | 99.99th=[57410] 00:41:04.813 write: IOPS=4535, BW=17.7MiB/s (18.6MB/s)(17.8MiB/1005msec); 0 zone resets 00:41:04.813 slat (nsec): min=1876, max=38485k, avg=113856.22, stdev=895389.40 00:41:04.813 clat (usec): min=1089, max=86973, avg=15680.13, stdev=13884.53 00:41:04.813 lat (usec): min=1131, max=86985, avg=15793.99, stdev=13953.80 00:41:04.813 clat percentiles (usec): 00:41:04.813 | 1.00th=[ 3097], 5.00th=[ 5145], 10.00th=[ 6063], 20.00th=[ 8291], 00:41:04.813 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9896], 60.00th=[11076], 00:41:04.813 | 70.00th=[14484], 80.00th=[23200], 90.00th=[31851], 95.00th=[36439], 00:41:04.813 | 99.00th=[80217], 99.50th=[86508], 99.90th=[86508], 99.95th=[86508], 00:41:04.813 | 99.99th=[86508] 00:41:04.813 bw ( KiB/s): min=16384, max=19064, per=29.09%, avg=17724.00, stdev=1895.05, samples=2 00:41:04.813 iops : min= 4096, max= 4766, avg=4431.00, stdev=473.76, samples=2 00:41:04.813 lat (msec) : 2=0.17%, 4=1.48%, 10=41.02%, 20=38.04%, 50=17.24% 00:41:04.813 lat (msec) : 100=2.05% 00:41:04.813 cpu : usr=3.19%, sys=5.08%, ctx=408, majf=0, minf=1 00:41:04.813 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:41:04.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:04.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:04.813 issued rwts: total=4096,4558,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:04.813 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:41:04.813 job3: (groupid=0, jobs=1): err= 0: pid=1276904: Sun Dec 15 06:31:24 2024 00:41:04.813 read: IOPS=3694, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1003msec) 00:41:04.813 slat (nsec): min=1151, max=11794k, avg=106686.54, stdev=736881.43 00:41:04.813 clat (usec): min=1802, max=56730, avg=13285.53, stdev=8853.69 00:41:04.813 lat (usec): min=1810, max=56739, avg=13392.21, stdev=8939.14 00:41:04.813 clat percentiles (usec): 00:41:04.813 | 1.00th=[ 4178], 5.00th=[ 4752], 10.00th=[ 5800], 20.00th=[ 9241], 00:41:04.813 | 30.00th=[10028], 40.00th=[10552], 50.00th=[10814], 60.00th=[11863], 00:41:04.813 | 70.00th=[13435], 80.00th=[14484], 90.00th=[17957], 95.00th=[34866], 00:41:04.813 | 99.00th=[53740], 99.50th=[55313], 99.90th=[56886], 99.95th=[56886], 00:41:04.813 | 99.99th=[56886] 00:41:04.813 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:41:04.813 slat (nsec): min=1988, max=14423k, avg=118177.92, stdev=632493.17 00:41:04.813 clat (usec): min=288, max=83511, avg=19043.48, stdev=17043.77 00:41:04.813 lat (usec): min=329, max=83521, avg=19161.66, stdev=17160.74 00:41:04.813 clat percentiles (usec): 00:41:04.813 | 1.00th=[ 668], 5.00th=[ 1991], 10.00th=[ 3916], 20.00th=[ 5407], 00:41:04.813 | 30.00th=[ 6849], 40.00th=[ 9110], 50.00th=[11469], 60.00th=[18744], 00:41:04.813 | 70.00th=[24249], 80.00th=[34866], 90.00th=[41681], 95.00th=[48497], 00:41:04.813 | 99.00th=[79168], 99.50th=[81265], 99.90th=[83362], 99.95th=[83362], 00:41:04.813 | 99.99th=[83362] 00:41:04.813 bw ( KiB/s): min=16344, max=16384, per=26.85%, avg=16364.00, stdev=28.28, samples=2 00:41:04.813 iops : min= 4086, max= 4096, avg=4091.00, stdev= 7.07, samples=2 00:41:04.813 lat (usec) : 500=0.10%, 750=1.23%, 1000=0.38% 00:41:04.813 lat (msec) : 2=1.13%, 4=2.87%, 10=30.31%, 20=39.37%, 50=21.69% 00:41:04.813 lat (msec) : 100=2.91% 00:41:04.813 cpu : usr=2.10%, sys=4.59%, ctx=454, majf=0, minf=1 00:41:04.813 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:41:04.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:04.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:04.813 issued rwts: total=3706,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:04.813 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:04.813 00:41:04.813 Run status group 0 (all jobs): 00:41:04.813 READ: bw=53.9MiB/s (56.5MB/s), 10.4MiB/s-15.9MiB/s (11.0MB/s-16.7MB/s), io=54.2MiB (56.8MB), run=1003-1005msec 00:41:04.813 WRITE: bw=59.5MiB/s (62.4MB/s), 12.0MiB/s-17.7MiB/s (12.5MB/s-18.6MB/s), io=59.8MiB (62.7MB), run=1003-1005msec 00:41:04.813 00:41:04.813 Disk stats (read/write): 00:41:04.813 nvme0n1: ios=3121/3175, merge=0/0, ticks=19321/26782, in_queue=46103, util=85.97% 00:41:04.813 nvme0n2: ios=2474/2560, merge=0/0, ticks=25697/44763, in_queue=70460, util=88.87% 00:41:04.813 nvme0n3: ios=3655/4096, merge=0/0, ticks=33638/38977, in_queue=72615, util=91.51% 00:41:04.813 nvme0n4: ios=2734/3072, merge=0/0, ticks=36748/64548, in_queue=101296, util=95.45% 00:41:04.813 06:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:41:04.814 06:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1277006 00:41:04.814 06:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:41:04.814 06:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:41:04.814 [global] 00:41:04.814 thread=1 00:41:04.814 invalidate=1 00:41:04.814 rw=read 00:41:04.814 time_based=1 00:41:04.814 runtime=10 00:41:04.814 ioengine=libaio 00:41:04.814 direct=1 00:41:04.814 bs=4096 00:41:04.814 iodepth=1 00:41:04.814 norandommap=1 00:41:04.814 numjobs=1 00:41:04.814 00:41:04.814 [job0] 00:41:04.814 
filename=/dev/nvme0n1 00:41:04.814 [job1] 00:41:04.814 filename=/dev/nvme0n2 00:41:04.814 [job2] 00:41:04.814 filename=/dev/nvme0n3 00:41:04.814 [job3] 00:41:04.814 filename=/dev/nvme0n4 00:41:04.814 Could not set queue depth (nvme0n1) 00:41:04.814 Could not set queue depth (nvme0n2) 00:41:04.814 Could not set queue depth (nvme0n3) 00:41:04.814 Could not set queue depth (nvme0n4) 00:41:04.814 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:04.814 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:04.814 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:04.814 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:04.814 fio-3.35 00:41:04.814 Starting 4 threads 00:41:08.092 06:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:41:08.092 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=708608, buflen=4096 00:41:08.092 fio: pid=1277313, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:08.092 06:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:41:08.092 06:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:08.092 06:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:41:08.092 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=299008, buflen=4096 00:41:08.092 fio: 
pid=1277307, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:08.350 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=32464896, buflen=4096 00:41:08.350 fio: pid=1277279, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:08.350 06:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:08.350 06:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:41:08.608 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=34066432, buflen=4096 00:41:08.608 fio: pid=1277292, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:41:08.608 06:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:08.608 06:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:41:08.608 00:41:08.608 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1277279: Sun Dec 15 06:31:28 2024 00:41:08.608 read: IOPS=2474, BW=9895KiB/s (10.1MB/s)(31.0MiB/3204msec) 00:41:08.608 slat (usec): min=5, max=23530, avg=13.22, stdev=325.14 00:41:08.608 clat (usec): min=177, max=41884, avg=387.09, stdev=2585.78 00:41:08.608 lat (usec): min=184, max=41891, avg=400.32, stdev=2606.64 00:41:08.608 clat percentiles (usec): 00:41:08.608 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 202], 00:41:08.608 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 227], 60.00th=[ 229], 00:41:08.608 | 70.00th=[ 231], 80.00th=[ 235], 90.00th=[ 239], 95.00th=[ 243], 00:41:08.608 | 99.00th=[ 289], 
99.50th=[ 351], 99.90th=[41157], 99.95th=[41157], 00:41:08.608 | 99.99th=[41681] 00:41:08.608 bw ( KiB/s): min= 96, max=16848, per=49.63%, avg=9596.83, stdev=7733.76, samples=6 00:41:08.608 iops : min= 24, max= 4212, avg=2399.17, stdev=1933.42, samples=6 00:41:08.608 lat (usec) : 250=97.26%, 500=2.30%, 750=0.03% 00:41:08.608 lat (msec) : 50=0.40% 00:41:08.608 cpu : usr=0.59%, sys=2.31%, ctx=7935, majf=0, minf=1 00:41:08.608 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:08.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:08.608 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:08.608 issued rwts: total=7927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:08.608 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:08.608 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1277292: Sun Dec 15 06:31:28 2024 00:41:08.608 read: IOPS=2438, BW=9753KiB/s (9987kB/s)(32.5MiB/3411msec) 00:41:08.608 slat (usec): min=7, max=18823, avg=12.23, stdev=218.75 00:41:08.608 clat (usec): min=169, max=42044, avg=395.71, stdev=2648.84 00:41:08.608 lat (usec): min=194, max=60030, avg=407.16, stdev=2695.47 00:41:08.608 clat percentiles (usec): 00:41:08.609 | 1.00th=[ 196], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:41:08.609 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 225], 60.00th=[ 229], 00:41:08.609 | 70.00th=[ 231], 80.00th=[ 235], 90.00th=[ 239], 95.00th=[ 245], 00:41:08.609 | 99.00th=[ 262], 99.50th=[ 302], 99.90th=[41157], 99.95th=[41681], 00:41:08.609 | 99.99th=[42206] 00:41:08.609 bw ( KiB/s): min= 104, max=17392, per=56.26%, avg=10878.00, stdev=7127.69, samples=6 00:41:08.609 iops : min= 26, max= 4348, avg=2719.50, stdev=1781.92, samples=6 00:41:08.609 lat (usec) : 250=97.51%, 500=2.06% 00:41:08.609 lat (msec) : 50=0.42% 00:41:08.609 cpu : usr=1.64%, sys=4.22%, ctx=8321, majf=0, minf=2 00:41:08.609 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:08.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:08.609 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:08.609 issued rwts: total=8318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:08.609 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:08.609 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1277307: Sun Dec 15 06:31:28 2024 00:41:08.609 read: IOPS=24, BW=97.9KiB/s (100kB/s)(292KiB/2984msec) 00:41:08.609 slat (usec): min=12, max=8824, avg=143.30, stdev=1023.01 00:41:08.609 clat (usec): min=505, max=41881, avg=40432.27, stdev=4739.45 00:41:08.609 lat (usec): min=538, max=49935, avg=40577.22, stdev=4866.17 00:41:08.609 clat percentiles (usec): 00:41:08.609 | 1.00th=[ 506], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:41:08.609 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:08.609 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:08.609 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:41:08.609 | 99.99th=[41681] 00:41:08.609 bw ( KiB/s): min= 96, max= 104, per=0.51%, avg=99.20, stdev= 4.38, samples=5 00:41:08.609 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:41:08.609 lat (usec) : 750=1.35% 00:41:08.609 lat (msec) : 50=97.30% 00:41:08.609 cpu : usr=0.13%, sys=0.00%, ctx=76, majf=0, minf=2 00:41:08.609 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:08.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:08.609 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:08.609 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:08.609 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:08.609 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, 
error=Operation not supported): pid=1277313: Sun Dec 15 06:31:28 2024 00:41:08.609 read: IOPS=63, BW=252KiB/s (258kB/s)(692KiB/2744msec) 00:41:08.609 slat (nsec): min=5364, max=31588, avg=12926.39, stdev=7616.82 00:41:08.609 clat (usec): min=207, max=41913, avg=15721.38, stdev=19759.24 00:41:08.609 lat (usec): min=214, max=41936, avg=15734.26, stdev=19765.66 00:41:08.609 clat percentiles (usec): 00:41:08.609 | 1.00th=[ 208], 5.00th=[ 229], 10.00th=[ 241], 20.00th=[ 249], 00:41:08.609 | 30.00th=[ 258], 40.00th=[ 269], 50.00th=[ 293], 60.00th=[ 338], 00:41:08.609 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:08.609 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:41:08.609 | 99.99th=[41681] 00:41:08.609 bw ( KiB/s): min= 96, max= 104, per=0.52%, avg=100.80, stdev= 4.38, samples=5 00:41:08.609 iops : min= 24, max= 26, avg=25.20, stdev= 1.10, samples=5 00:41:08.609 lat (usec) : 250=21.26%, 500=40.23% 00:41:08.609 lat (msec) : 50=37.93% 00:41:08.609 cpu : usr=0.04%, sys=0.07%, ctx=175, majf=0, minf=2 00:41:08.609 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:08.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:08.609 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:08.609 issued rwts: total=174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:08.609 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:08.609 00:41:08.609 Run status group 0 (all jobs): 00:41:08.609 READ: bw=18.9MiB/s (19.8MB/s), 97.9KiB/s-9895KiB/s (100kB/s-10.1MB/s), io=64.4MiB (67.5MB), run=2744-3411msec 00:41:08.609 00:41:08.609 Disk stats (read/write): 00:41:08.609 nvme0n1: ios=7551/0, merge=0/0, ticks=3890/0, in_queue=3890, util=98.71% 00:41:08.609 nvme0n2: ios=8353/0, merge=0/0, ticks=3160/0, in_queue=3160, util=96.24% 00:41:08.609 nvme0n3: ios=108/0, merge=0/0, ticks=3662/0, in_queue=3662, util=99.83% 00:41:08.609 nvme0n4: ios=103/0, merge=0/0, 
ticks=3417/0, in_queue=3417, util=100.00% 00:41:08.609 06:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:08.609 06:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:41:08.867 06:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:08.867 06:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:41:09.125 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:09.125 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:41:09.383 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:09.383 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:41:09.641 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:41:09.641 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1277006 00:41:09.641 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:41:09.641 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:09.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:41:09.641 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:09.641 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:41:09.641 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:09.641 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:09.641 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:09.641 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:09.641 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:41:09.641 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:41:09.641 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:41:09.641 nvmf hotplug test: fio failed as expected 00:41:09.641 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:09.899 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:41:09.899 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:41:09.899 06:31:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:41:09.899 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:41:09.899 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:41:09.899 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:09.899 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:41:09.899 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:09.899 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:41:09.899 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:09.899 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:09.899 rmmod nvme_tcp 00:41:09.899 rmmod nvme_fabrics 00:41:09.899 rmmod nvme_keyring 00:41:09.899 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:09.899 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:41:09.899 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:41:09.899 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1274584 ']' 00:41:09.899 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1274584 00:41:09.899 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1274584 ']' 00:41:09.899 06:31:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1274584 00:41:09.899 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:41:09.899 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:09.899 06:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1274584 00:41:09.899 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:09.899 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:09.899 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1274584' 00:41:09.899 killing process with pid 1274584 00:41:09.899 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1274584 00:41:09.899 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1274584 00:41:10.158 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:10.158 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:10.158 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:10.158 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:41:10.158 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:41:10.158 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:41:10.158 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:41:10.158 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:10.158 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:10.158 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:10.158 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:10.158 06:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:12.694 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:12.694 00:41:12.694 real 0m25.865s 00:41:12.694 user 1m31.773s 00:41:12.694 sys 0m10.836s 00:41:12.694 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:12.694 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:12.694 ************************************ 00:41:12.694 END TEST nvmf_fio_target 00:41:12.694 ************************************ 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@10 -- # set +x 00:41:12.695 ************************************ 00:41:12.695 START TEST nvmf_bdevio 00:41:12.695 ************************************ 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:41:12.695 * Looking for test storage... 00:41:12.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 
-- # local 'op=<' 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@366 -- # ver2[v]=2 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:12.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:12.695 --rc genhtml_branch_coverage=1 00:41:12.695 --rc genhtml_function_coverage=1 00:41:12.695 --rc genhtml_legend=1 00:41:12.695 --rc geninfo_all_blocks=1 00:41:12.695 --rc geninfo_unexecuted_blocks=1 00:41:12.695 00:41:12.695 ' 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:12.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:12.695 --rc genhtml_branch_coverage=1 00:41:12.695 --rc genhtml_function_coverage=1 00:41:12.695 --rc genhtml_legend=1 00:41:12.695 --rc geninfo_all_blocks=1 00:41:12.695 --rc geninfo_unexecuted_blocks=1 00:41:12.695 00:41:12.695 ' 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:12.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:12.695 --rc genhtml_branch_coverage=1 00:41:12.695 --rc genhtml_function_coverage=1 00:41:12.695 --rc genhtml_legend=1 00:41:12.695 --rc geninfo_all_blocks=1 00:41:12.695 --rc geninfo_unexecuted_blocks=1 00:41:12.695 00:41:12.695 ' 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:12.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:12.695 --rc genhtml_branch_coverage=1 00:41:12.695 --rc genhtml_function_coverage=1 00:41:12.695 --rc genhtml_legend=1 00:41:12.695 --rc geninfo_all_blocks=1 00:41:12.695 --rc geninfo_unexecuted_blocks=1 00:41:12.695 00:41:12.695 ' 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:12.695 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:12.695 06:31:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:41:12.696 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:12.696 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:41:12.696 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:12.696 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:12.696 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:12.696 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:12.696 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:12.696 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:12.696 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:12.696 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:12.696 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:41:12.696 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:12.696 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:12.696 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:12.696 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:41:12.696 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:12.696 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:12.696 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:12.696 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:12.696 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:12.696 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:12.696 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:12.696 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:12.696 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:12.696 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:12.696 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:41:12.696 06:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:41:17.969 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:17.970 06:31:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:17.970 06:31:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:17.970 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:17.970 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:18.229 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:18.229 Found net devices under 0000:af:00.0: cvl_0_0 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:18.229 Found net devices under 0000:af:00.1: cvl_0_1 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:18.229 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:18.230 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:18.230 
06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:18.230 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:18.230 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:18.230 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:18.230 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:18.230 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:18.230 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:18.230 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:18.230 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:18.230 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:18.230 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:18.230 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:18.230 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:18.230 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:18.230 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:41:18.230 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:18.230 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:18.489 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:18.489 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:18.489 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:18.489 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:18.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:18.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:41:18.489 00:41:18.489 --- 10.0.0.2 ping statistics --- 00:41:18.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:18.489 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:41:18.489 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:18.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:18.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:41:18.489 00:41:18.489 --- 10.0.0.1 ping statistics --- 00:41:18.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:18.489 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:41:18.489 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:18.489 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:41:18.489 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:18.489 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:18.489 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:18.489 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:18.489 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:18.489 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:18.489 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:18.489 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:41:18.489 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:18.489 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:18.489 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:18.489 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=1281514 00:41:18.489 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1281514 00:41:18.489 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:41:18.489 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1281514 ']' 00:41:18.489 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:18.489 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:18.489 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:18.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:18.489 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:18.489 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:18.489 [2024-12-15 06:31:38.537766] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:18.489 [2024-12-15 06:31:38.538657] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:41:18.489 [2024-12-15 06:31:38.538700] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:18.489 [2024-12-15 06:31:38.615930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:18.748 [2024-12-15 06:31:38.638642] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:18.748 [2024-12-15 06:31:38.638676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:18.748 [2024-12-15 06:31:38.638683] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:18.748 [2024-12-15 06:31:38.638689] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:18.748 [2024-12-15 06:31:38.638694] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:18.748 [2024-12-15 06:31:38.640172] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:41:18.748 [2024-12-15 06:31:38.640284] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:41:18.748 [2024-12-15 06:31:38.640391] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:41:18.748 [2024-12-15 06:31:38.640392] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:41:18.748 [2024-12-15 06:31:38.702257] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:18.748 [2024-12-15 06:31:38.703130] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:18.748 [2024-12-15 06:31:38.703344] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:41:18.748 [2024-12-15 06:31:38.703766] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:18.748 [2024-12-15 06:31:38.703807] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:18.748 [2024-12-15 06:31:38.769073] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:18.748 Malloc0 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:18.748 [2024-12-15 06:31:38.853151] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:18.748 { 00:41:18.748 "params": { 00:41:18.748 "name": "Nvme$subsystem", 00:41:18.748 "trtype": "$TEST_TRANSPORT", 00:41:18.748 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:18.748 "adrfam": "ipv4", 00:41:18.748 "trsvcid": "$NVMF_PORT", 00:41:18.748 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:18.748 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:18.748 "hdgst": ${hdgst:-false}, 00:41:18.748 "ddgst": ${ddgst:-false} 00:41:18.748 }, 00:41:18.748 "method": "bdev_nvme_attach_controller" 00:41:18.748 } 00:41:18.748 EOF 00:41:18.748 )") 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:41:18.748 06:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:18.748 "params": { 00:41:18.748 "name": "Nvme1", 00:41:18.748 "trtype": "tcp", 00:41:18.748 "traddr": "10.0.0.2", 00:41:18.748 "adrfam": "ipv4", 00:41:18.748 "trsvcid": "4420", 00:41:18.748 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:18.748 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:18.748 "hdgst": false, 00:41:18.748 "ddgst": false 00:41:18.748 }, 00:41:18.748 "method": "bdev_nvme_attach_controller" 00:41:18.748 }' 00:41:19.006 [2024-12-15 06:31:38.903020] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:41:19.006 [2024-12-15 06:31:38.903068] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1281537 ] 00:41:19.006 [2024-12-15 06:31:38.976497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:19.006 [2024-12-15 06:31:39.001293] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:19.006 [2024-12-15 06:31:39.001404] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:19.006 [2024-12-15 06:31:39.001405] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:41:19.264 I/O targets: 00:41:19.264 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:41:19.264 00:41:19.264 00:41:19.264 CUnit - A unit testing framework for C - Version 2.1-3 00:41:19.264 http://cunit.sourceforge.net/ 00:41:19.264 00:41:19.264 00:41:19.264 Suite: bdevio tests on: Nvme1n1 00:41:19.264 Test: blockdev write read block ...passed 00:41:19.264 Test: blockdev write zeroes read block ...passed 00:41:19.264 Test: blockdev write zeroes read no split ...passed 00:41:19.264 Test: blockdev 
write zeroes read split ...passed 00:41:19.264 Test: blockdev write zeroes read split partial ...passed 00:41:19.264 Test: blockdev reset ...[2024-12-15 06:31:39.295038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:41:19.265 [2024-12-15 06:31:39.295106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e12340 (9): Bad file descriptor 00:41:19.523 [2024-12-15 06:31:39.428796] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:41:19.523 passed 00:41:19.523 Test: blockdev write read 8 blocks ...passed 00:41:19.523 Test: blockdev write read size > 128k ...passed 00:41:19.523 Test: blockdev write read invalid size ...passed 00:41:19.523 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:41:19.523 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:41:19.523 Test: blockdev write read max offset ...passed 00:41:19.523 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:41:19.523 Test: blockdev writev readv 8 blocks ...passed 00:41:19.523 Test: blockdev writev readv 30 x 1block ...passed 00:41:19.782 Test: blockdev writev readv block ...passed 00:41:19.782 Test: blockdev writev readv size > 128k ...passed 00:41:19.782 Test: blockdev writev readv size > 128k in two iovs ...passed 00:41:19.782 Test: blockdev comparev and writev ...[2024-12-15 06:31:39.679007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:19.782 [2024-12-15 06:31:39.679042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:19.782 [2024-12-15 06:31:39.679057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:19.782 
[2024-12-15 06:31:39.679065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:41:19.782 [2024-12-15 06:31:39.679354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:19.782 [2024-12-15 06:31:39.679365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:41:19.782 [2024-12-15 06:31:39.679376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:19.782 [2024-12-15 06:31:39.679384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:41:19.782 [2024-12-15 06:31:39.679679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:19.782 [2024-12-15 06:31:39.679689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:41:19.782 [2024-12-15 06:31:39.679701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:19.782 [2024-12-15 06:31:39.679708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:41:19.782 [2024-12-15 06:31:39.679995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:19.782 [2024-12-15 06:31:39.680008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:41:19.782 [2024-12-15 06:31:39.680019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:19.782 [2024-12-15 06:31:39.680028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:41:19.782 passed 00:41:19.782 Test: blockdev nvme passthru rw ...passed 00:41:19.782 Test: blockdev nvme passthru vendor specific ...[2024-12-15 06:31:39.763288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:19.782 [2024-12-15 06:31:39.763305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:41:19.782 [2024-12-15 06:31:39.763421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:19.782 [2024-12-15 06:31:39.763431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:41:19.782 [2024-12-15 06:31:39.763539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:19.782 [2024-12-15 06:31:39.763549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:41:19.782 [2024-12-15 06:31:39.763655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:19.782 [2024-12-15 06:31:39.763665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:41:19.782 passed 00:41:19.782 Test: blockdev nvme admin passthru ...passed 00:41:19.782 Test: blockdev copy ...passed 00:41:19.782 00:41:19.782 Run Summary: Type Total Ran Passed Failed Inactive 00:41:19.782 suites 1 1 n/a 0 0 00:41:19.782 tests 23 23 23 0 0 00:41:19.782 asserts 152 152 152 0 n/a 00:41:19.782 00:41:19.782 Elapsed time = 1.267 
seconds 00:41:20.041 06:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:20.041 06:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.041 06:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:20.041 06:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.041 06:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:41:20.041 06:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:41:20.041 06:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:20.041 06:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:41:20.041 06:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:20.041 06:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:41:20.041 06:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:20.041 06:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:20.041 rmmod nvme_tcp 00:41:20.041 rmmod nvme_fabrics 00:41:20.041 rmmod nvme_keyring 00:41:20.041 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:20.041 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:41:20.041 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:41:20.041 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 1281514 ']' 00:41:20.041 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1281514 00:41:20.041 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1281514 ']' 00:41:20.041 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1281514 00:41:20.041 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:41:20.041 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:20.041 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1281514 00:41:20.041 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:41:20.041 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:41:20.041 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1281514' 00:41:20.041 killing process with pid 1281514 00:41:20.041 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1281514 00:41:20.041 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1281514 00:41:20.300 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:20.300 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:20.300 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:20.300 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:41:20.300 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:41:20.300 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:20.300 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:41:20.300 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:20.300 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:20.300 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:20.300 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:20.300 06:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:22.833 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:22.833 00:41:22.833 real 0m10.019s 00:41:22.833 user 0m8.956s 00:41:22.833 sys 0m5.185s 00:41:22.833 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:22.833 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:22.833 ************************************ 00:41:22.833 END TEST nvmf_bdevio 00:41:22.833 ************************************ 00:41:22.833 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:41:22.833 00:41:22.833 real 4m30.472s 00:41:22.833 user 9m3.524s 00:41:22.833 sys 1m49.055s 00:41:22.833 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:41:22.833 06:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:22.833 ************************************ 00:41:22.833 END TEST nvmf_target_core_interrupt_mode 00:41:22.833 ************************************ 00:41:22.833 06:31:42 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:41:22.833 06:31:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:22.833 06:31:42 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:22.833 06:31:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:22.833 ************************************ 00:41:22.833 START TEST nvmf_interrupt 00:41:22.833 ************************************ 00:41:22.833 06:31:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:41:22.833 * Looking for test storage... 
00:41:22.833 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:22.833 06:31:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:22.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:22.834 --rc genhtml_branch_coverage=1 00:41:22.834 --rc genhtml_function_coverage=1 00:41:22.834 --rc genhtml_legend=1 00:41:22.834 --rc geninfo_all_blocks=1 00:41:22.834 --rc geninfo_unexecuted_blocks=1 00:41:22.834 00:41:22.834 ' 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:22.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:22.834 --rc genhtml_branch_coverage=1 00:41:22.834 --rc 
genhtml_function_coverage=1 00:41:22.834 --rc genhtml_legend=1 00:41:22.834 --rc geninfo_all_blocks=1 00:41:22.834 --rc geninfo_unexecuted_blocks=1 00:41:22.834 00:41:22.834 ' 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:22.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:22.834 --rc genhtml_branch_coverage=1 00:41:22.834 --rc genhtml_function_coverage=1 00:41:22.834 --rc genhtml_legend=1 00:41:22.834 --rc geninfo_all_blocks=1 00:41:22.834 --rc geninfo_unexecuted_blocks=1 00:41:22.834 00:41:22.834 ' 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:22.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:22.834 --rc genhtml_branch_coverage=1 00:41:22.834 --rc genhtml_function_coverage=1 00:41:22.834 --rc genhtml_legend=1 00:41:22.834 --rc geninfo_all_blocks=1 00:41:22.834 --rc geninfo_unexecuted_blocks=1 00:41:22.834 00:41:22.834 ' 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:22.834 
06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:22.834 
06:31:42 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:22.834 06:31:42 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:22.834 
06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:41:22.834 06:31:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:29.397 06:31:48 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:29.397 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:29.397 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:29.397 06:31:48 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:29.397 Found net devices under 0000:af:00.0: cvl_0_0 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:29.397 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:29.398 Found net devices under 0000:af:00.1: cvl_0_1 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:29.398 06:31:48 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:29.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:29.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:41:29.398 00:41:29.398 --- 10.0.0.2 ping statistics --- 00:41:29.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:29.398 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:29.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:29.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:41:29.398 00:41:29.398 --- 10.0.0.1 ping statistics --- 00:41:29.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:29.398 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:29.398 06:31:48 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1285232 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1285232 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1285232 ']' 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:29.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:29.398 [2024-12-15 06:31:48.590276] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:29.398 [2024-12-15 06:31:48.591173] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:41:29.398 [2024-12-15 06:31:48.591206] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:29.398 [2024-12-15 06:31:48.667110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:29.398 [2024-12-15 06:31:48.689120] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:29.398 [2024-12-15 06:31:48.689156] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:29.398 [2024-12-15 06:31:48.689164] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:29.398 [2024-12-15 06:31:48.689170] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:29.398 [2024-12-15 06:31:48.689175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:29.398 [2024-12-15 06:31:48.690249] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:29.398 [2024-12-15 06:31:48.690251] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:29.398 [2024-12-15 06:31:48.752189] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:29.398 [2024-12-15 06:31:48.752752] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:29.398 [2024-12-15 06:31:48.752934] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:41:29.398 5000+0 records in 00:41:29.398 5000+0 records out 00:41:29.398 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0167864 s, 610 MB/s 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:29.398 AIO0 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.398 06:31:48 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:29.398 [2024-12-15 06:31:48.883034] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:29.398 [2024-12-15 06:31:48.923349] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1285232 0 00:41:29.398 06:31:48 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1285232 0 idle 00:41:29.399 06:31:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1285232 00:41:29.399 06:31:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:29.399 06:31:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:29.399 06:31:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:29.399 06:31:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:29.399 06:31:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:29.399 06:31:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:29.399 06:31:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:29.399 06:31:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:29.399 06:31:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:29.399 06:31:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1285232 -w 256 00:41:29.399 06:31:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1285232 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:00.22 reactor_0' 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1285232 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:00.22 reactor_0 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:29.399 
06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1285232 1 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1285232 1 idle 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1285232 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1285232 -w 256 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1285236 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:00.00 reactor_1' 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1285236 root 20 0 128.2g 
46848 33792 S 0.0 0.1 0:00.00 reactor_1 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1285278 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1285232 0 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1285232 0 busy 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1285232 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1285232 -w 256 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1285232 root 20 0 128.2g 48384 34560 R 99.9 0.1 0:00.42 reactor_0' 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1285232 root 20 0 128.2g 48384 34560 R 99.9 0.1 0:00.42 reactor_0 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:41:29.399 06:31:49 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1285232 1 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1285232 1 busy 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1285232 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1285232 -w 256 00:41:29.399 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:29.657 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1285236 root 20 0 128.2g 48384 34560 R 99.9 0.1 0:00.28 reactor_1' 00:41:29.657 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1285236 root 20 0 128.2g 48384 34560 R 99.9 0.1 0:00.28 reactor_1 00:41:29.657 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:29.657 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:29.657 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:41:29.657 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=99 00:41:29.657 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:29.657 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:41:29.657 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:41:29.657 06:31:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:29.657 06:31:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1285278
00:41:39.628 Initializing NVMe Controllers
00:41:39.628 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:41:39.628 Controller IO queue size 256, less than required.
00:41:39.628 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:41:39.628 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:41:39.628 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:41:39.628 Initialization complete. Launching workers.
00:41:39.628 ========================================================
00:41:39.628                                                             Latency(us)
00:41:39.628 Device Information                                               :     IOPS    MiB/s  Average      min      max
00:41:39.628 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16841.89    65.79 15207.56  2903.62 29619.06
00:41:39.628 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16979.69    66.33 15080.69  7303.03 25971.71
00:41:39.628 ========================================================
00:41:39.628 Total                                                            : 33821.59   132.12 15143.87  2903.62 29619.06
00:41:39.628
00:41:39.628 06:31:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:41:39.628 06:31:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1285232 0 00:41:39.628 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1285232 0 idle 00:41:39.628 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1285232 00:41:39.628 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:39.628 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:39.628 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:39.628 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:39.628 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:39.628 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:39.628 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:39.628 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:39.628 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:39.629 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1285232 -w 256 00:41:39.629 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- #
grep reactor_0 00:41:39.629 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1285232 root 20 0 128.2g 48384 34560 S 0.0 0.1 0:20.22 reactor_0' 00:41:39.629 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1285232 root 20 0 128.2g 48384 34560 S 0.0 0.1 0:20.22 reactor_0 00:41:39.629 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:39.629 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:39.629 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:39.629 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:39.629 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:39.629 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:39.629 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:39.629 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:39.629 06:31:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:41:39.629 06:31:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1285232 1 00:41:39.629 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1285232 1 idle 00:41:39.629 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1285232 00:41:39.629 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:39.629 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:39.629 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:39.629 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:39.629 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:39.629 06:31:59 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:39.629 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:39.629 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:39.629 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:39.629 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1285232 -w 256 00:41:39.629 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:39.888 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1285236 root 20 0 128.2g 48384 34560 S 0.0 0.1 0:10.00 reactor_1' 00:41:39.888 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1285236 root 20 0 128.2g 48384 34560 S 0.0 0.1 0:10.00 reactor_1 00:41:39.888 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:39.888 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:39.888 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:39.888 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:39.888 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:39.888 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:39.888 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:39.888 06:31:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:39.888 06:31:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:40.146 06:32:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:41:40.146 06:32:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:41:40.146 06:32:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:40.146 06:32:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:41:40.146 06:32:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:41:42.050 06:32:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:41:42.050 06:32:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:41:42.050 06:32:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:41:42.050 06:32:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:41:42.050 06:32:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:41:42.050 06:32:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:41:42.050 06:32:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:41:42.050 06:32:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1285232 0 00:41:42.050 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1285232 0 idle 00:41:42.050 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1285232 00:41:42.050 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:42.050 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:42.050 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1285232 -w 256 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1285232 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:20.46 reactor_0' 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1285232 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:20.46 reactor_0 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1285232 1 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1285232 1 idle 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1285232 00:41:42.309 
06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1285232 -w 256 00:41:42.309 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:42.568 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1285236 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:10.09 reactor_1' 00:41:42.568 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1285236 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:10.09 reactor_1 00:41:42.568 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:42.568 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:42.568 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:42.568 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:42.568 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:42.568 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:42.568 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:41:42.568 06:32:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:42.568 06:32:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:42.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:41:42.568 06:32:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:42.568 06:32:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:41:42.568 06:32:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:42.568 06:32:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:42.827 06:32:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:42.827 06:32:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:42.827 06:32:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:41:42.827 06:32:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:41:42.827 06:32:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:41:42.827 06:32:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:42.827 06:32:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:41:42.827 06:32:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:42.827 06:32:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:41:42.827 06:32:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:42.827 06:32:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:42.827 rmmod nvme_tcp 00:41:42.827 rmmod nvme_fabrics 00:41:42.827 rmmod nvme_keyring 00:41:42.827 06:32:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:42.827 06:32:02 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:41:42.827 06:32:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:41:42.827 06:32:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 1285232 ']' 00:41:42.827 06:32:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1285232 00:41:42.827 06:32:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1285232 ']' 00:41:42.827 06:32:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1285232 00:41:42.827 06:32:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:41:42.827 06:32:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:42.827 06:32:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1285232 00:41:42.827 06:32:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:42.827 06:32:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:42.827 06:32:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1285232' 00:41:42.827 killing process with pid 1285232 00:41:42.827 06:32:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1285232 00:41:42.827 06:32:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1285232 00:41:43.086 06:32:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:43.086 06:32:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:43.086 06:32:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:43.086 06:32:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:41:43.086 06:32:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:41:43.086 06:32:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:43.086 06:32:03 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:41:43.086 06:32:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:43.086 06:32:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:43.086 06:32:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:43.086 06:32:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:43.086 06:32:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:45.619 06:32:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:45.619 00:41:45.619 real 0m22.705s 00:41:45.619 user 0m39.546s 00:41:45.619 sys 0m8.398s 00:41:45.619 06:32:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:45.619 06:32:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:45.619 ************************************ 00:41:45.619 END TEST nvmf_interrupt 00:41:45.619 ************************************ 00:41:45.619 00:41:45.619 real 35m21.248s 00:41:45.619 user 86m7.103s 00:41:45.619 sys 10m30.556s 00:41:45.619 06:32:05 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:45.619 06:32:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:45.619 ************************************ 00:41:45.619 END TEST nvmf_tcp 00:41:45.619 ************************************ 00:41:45.619 06:32:05 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:41:45.619 06:32:05 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:45.619 06:32:05 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:45.619 06:32:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:45.619 06:32:05 -- common/autotest_common.sh@10 -- # set +x 00:41:45.619 ************************************ 
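The `waitforserial` helper exercised in the run above (common/autotest_common.sh@1202-1212) is a bounded polling loop over `lsblk`, waiting for the expected number of block devices carrying the subsystem's serial to appear after `nvme connect`. A simplified reconstruction under stated assumptions (the `_sketch` suffix is mine; the 2-second sleep and roughly 15-try bound mirror the traced script, which also special-cases an expected device count):

```shell
#!/usr/bin/env bash
# Simplified reconstruction of waitforserial: poll lsblk until the
# expected number of block devices with the given NVMe serial shows
# up, giving up after ~15 attempts.
waitforserial_sketch() {
    local serial=$1
    local nvme_device_counter=${2:-1}   # how many matching devices to expect
    local i=0 nvme_devices=0
    while (( i++ <= 15 )); do
        # count block devices whose SERIAL column matches
        nvme_devices=$(lsblk -l -o NAME,SERIAL 2>/dev/null | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
        sleep 2
    done
    return 1
}
```

In the trace the serial is `SPDKISFASTANDAWESOME`, set when the subsystem was created with `nvmf_create_subsystem ... -s SPDKISFASTANDAWESOME`; `waitforserial_disconnect` is the same idea inverted, polling until the device disappears.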
00:41:45.619 START TEST spdkcli_nvmf_tcp 00:41:45.619 ************************************ 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:45.619 * Looking for test storage... 00:41:45.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:45.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:45.619 --rc genhtml_branch_coverage=1 00:41:45.619 --rc genhtml_function_coverage=1 00:41:45.619 --rc genhtml_legend=1 00:41:45.619 --rc geninfo_all_blocks=1 00:41:45.619 --rc geninfo_unexecuted_blocks=1 00:41:45.619 00:41:45.619 ' 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:45.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:45.619 --rc genhtml_branch_coverage=1 00:41:45.619 --rc genhtml_function_coverage=1 00:41:45.619 --rc genhtml_legend=1 00:41:45.619 --rc geninfo_all_blocks=1 
00:41:45.619 --rc geninfo_unexecuted_blocks=1 00:41:45.619 00:41:45.619 ' 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:45.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:45.619 --rc genhtml_branch_coverage=1 00:41:45.619 --rc genhtml_function_coverage=1 00:41:45.619 --rc genhtml_legend=1 00:41:45.619 --rc geninfo_all_blocks=1 00:41:45.619 --rc geninfo_unexecuted_blocks=1 00:41:45.619 00:41:45.619 ' 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:45.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:45.619 --rc genhtml_branch_coverage=1 00:41:45.619 --rc genhtml_function_coverage=1 00:41:45.619 --rc genhtml_legend=1 00:41:45.619 --rc geninfo_all_blocks=1 00:41:45.619 --rc geninfo_unexecuted_blocks=1 00:41:45.619 00:41:45.619 ' 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:41:45.619 06:32:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:45.620 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1287945 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1287945 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1287945 ']' 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:41:45.620 
06:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:45.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:45.620 [2024-12-15 06:32:05.551468] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:41:45.620 [2024-12-15 06:32:05.551515] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1287945 ] 00:41:45.620 [2024-12-15 06:32:05.624743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:45.620 [2024-12-15 06:32:05.649012] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:45.620 [2024-12-15 06:32:05.649019] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:45.620 06:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:45.878 06:32:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:41:45.878 06:32:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:41:45.878 06:32:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:41:45.878 06:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:45.878 06:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:45.878 06:32:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:41:45.878 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:41:45.878 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:41:45.878 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:41:45.878 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:41:45.878 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:41:45.878 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:41:45.878 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:45.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:41:45.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:41:45.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:45.878 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:45.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:41:45.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:45.878 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:41:45.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:41:45.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:45.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:45.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:45.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:45.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:41:45.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:41:45.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:45.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:41:45.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:45.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:41:45.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:41:45.878 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:41:45.878 ' 00:41:48.406 [2024-12-15 06:32:08.472124] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:49.778 [2024-12-15 06:32:09.812481] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:41:52.304 [2024-12-15 06:32:12.292239] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:41:54.830 [2024-12-15 06:32:14.475021] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:41:56.210 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:41:56.210 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:41:56.210 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:41:56.210 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:41:56.210 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:41:56.210 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:41:56.210 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:41:56.210 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:56.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:41:56.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:41:56.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:56.210 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:56.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:41:56.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:56.210 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:41:56.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:41:56.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:56.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:56.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:56.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:56.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:41:56.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:41:56.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:56.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:41:56.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:56.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:41:56.210 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:41:56.210 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:41:56.210 06:32:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:41:56.210 06:32:16 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:41:56.210 06:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:56.210 06:32:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:41:56.210 06:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:56.210 06:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:56.210 06:32:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:41:56.210 06:32:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:41:56.832 06:32:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:41:56.832 06:32:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:41:56.832 06:32:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:41:56.832 06:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:56.832 06:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:56.832 06:32:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:41:56.832 06:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:56.832 06:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:56.832 06:32:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:41:56.832 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:41:56.832 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:56.832 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:41:56.832 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:41:56.832 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:41:56.832 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:41:56.832 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:56.832 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:41:56.832 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:41:56.832 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:41:56.832 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:41:56.832 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:41:56.832 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:41:56.832 ' 00:42:02.158 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:42:02.159 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:42:02.159 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:42:02.159 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:42:02.159 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:42:02.159 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:42:02.159 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:42:02.159 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:42:02.159 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:42:02.159 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:42:02.159 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:42:02.159 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:42:02.159 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:42:02.159 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:42:02.417 06:32:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:42:02.417 06:32:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:02.417 06:32:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:02.417 06:32:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1287945 00:42:02.417 06:32:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1287945 ']' 00:42:02.417 06:32:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1287945 00:42:02.417 06:32:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:42:02.417 06:32:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:02.417 06:32:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1287945 00:42:02.417 06:32:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:02.417 06:32:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:02.417 06:32:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1287945' 00:42:02.417 killing process with pid 1287945 00:42:02.417 06:32:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1287945 00:42:02.417 06:32:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1287945 00:42:02.675 06:32:22 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:42:02.675 06:32:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:42:02.675 06:32:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1287945 ']' 00:42:02.675 06:32:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1287945 00:42:02.675 06:32:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1287945 ']' 00:42:02.675 06:32:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1287945 00:42:02.675 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1287945) - No such process 00:42:02.675 06:32:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1287945 is not found' 00:42:02.675 Process with pid 1287945 is not found 00:42:02.675 06:32:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:42:02.675 06:32:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:42:02.675 06:32:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:42:02.675 00:42:02.675 real 0m17.317s 00:42:02.675 user 0m38.211s 00:42:02.675 sys 0m0.804s 00:42:02.675 06:32:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:02.675 06:32:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:02.675 ************************************ 00:42:02.675 END TEST spdkcli_nvmf_tcp 00:42:02.675 ************************************ 00:42:02.675 06:32:22 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:42:02.675 06:32:22 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:02.675 06:32:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:42:02.675 06:32:22 -- common/autotest_common.sh@10 -- # set +x 00:42:02.675 ************************************ 00:42:02.675 START TEST nvmf_identify_passthru 00:42:02.675 ************************************ 00:42:02.675 06:32:22 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:42:02.675 * Looking for test storage... 00:42:02.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:02.675 06:32:22 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:02.675 06:32:22 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:42:02.675 06:32:22 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:02.934 06:32:22 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:02.934 06:32:22 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:02.934 06:32:22 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:02.934 06:32:22 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:02.934 06:32:22 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:42:02.934 06:32:22 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:42:02.934 06:32:22 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:42:02.934 06:32:22 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:42:02.934 06:32:22 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:42:02.934 06:32:22 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:42:02.934 06:32:22 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:42:02.934 06:32:22 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:02.934 06:32:22 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:42:02.934 06:32:22 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:42:02.934 06:32:22 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:02.934 06:32:22 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:02.934 06:32:22 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:42:02.934 06:32:22 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:42:02.934 06:32:22 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:02.934 06:32:22 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:42:02.934 06:32:22 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:42:02.934 06:32:22 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:42:02.934 06:32:22 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:42:02.934 06:32:22 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:02.934 06:32:22 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:42:02.934 06:32:22 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:42:02.934 06:32:22 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:02.934 06:32:22 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:02.934 06:32:22 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:42:02.934 06:32:22 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:02.934 06:32:22 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:02.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:02.934 --rc genhtml_branch_coverage=1 00:42:02.934 --rc genhtml_function_coverage=1 00:42:02.934 --rc genhtml_legend=1 00:42:02.934 --rc geninfo_all_blocks=1 00:42:02.934 --rc geninfo_unexecuted_blocks=1 00:42:02.934 
00:42:02.934 ' 00:42:02.934 06:32:22 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:02.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:02.934 --rc genhtml_branch_coverage=1 00:42:02.934 --rc genhtml_function_coverage=1 00:42:02.934 --rc genhtml_legend=1 00:42:02.934 --rc geninfo_all_blocks=1 00:42:02.934 --rc geninfo_unexecuted_blocks=1 00:42:02.934 00:42:02.934 ' 00:42:02.934 06:32:22 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:02.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:02.934 --rc genhtml_branch_coverage=1 00:42:02.934 --rc genhtml_function_coverage=1 00:42:02.934 --rc genhtml_legend=1 00:42:02.934 --rc geninfo_all_blocks=1 00:42:02.934 --rc geninfo_unexecuted_blocks=1 00:42:02.934 00:42:02.934 ' 00:42:02.934 06:32:22 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:02.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:02.935 --rc genhtml_branch_coverage=1 00:42:02.935 --rc genhtml_function_coverage=1 00:42:02.935 --rc genhtml_legend=1 00:42:02.935 --rc geninfo_all_blocks=1 00:42:02.935 --rc geninfo_unexecuted_blocks=1 00:42:02.935 00:42:02.935 ' 00:42:02.935 06:32:22 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:02.935 06:32:22 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:02.935 06:32:22 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:42:02.935 06:32:22 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:02.935 06:32:22 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:02.935 06:32:22 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:02.935 06:32:22 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:02.935 06:32:22 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:02.935 06:32:22 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:02.935 06:32:22 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:42:02.935 06:32:22 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:42:02.935 06:32:22 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:02.935 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:02.935 06:32:22 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:02.935 06:32:22 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:42:02.935 06:32:22 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:02.935 06:32:22 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:02.935 06:32:22 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:02.935 06:32:22 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:02.935 06:32:22 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:02.935 06:32:22 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:02.935 06:32:22 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:42:02.935 06:32:22 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:02.935 06:32:22 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:02.935 06:32:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:02.935 06:32:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:02.935 06:32:22 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:42:02.935 06:32:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:09.509 
06:32:28 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:09.509 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:09.509 Found 0000:af:00.1 
(0x8086 - 0x159b) 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:09.509 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:09.510 Found net devices under 0000:af:00.0: cvl_0_0 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:09.510 06:32:28 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:09.510 Found net devices under 0000:af:00.1: cvl_0_1 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:09.510 
06:32:28 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:09.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:42:09.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:42:09.510 00:42:09.510 --- 10.0.0.2 ping statistics --- 00:42:09.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:09.510 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:09.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:09.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:42:09.510 00:42:09.510 --- 10.0.0.1 ping statistics --- 00:42:09.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:09.510 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:09.510 06:32:28 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:09.510 06:32:28 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:42:09.510 06:32:28 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:09.510 06:32:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:09.510 06:32:28 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:42:09.510 
06:32:28 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:42:09.510 06:32:28 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:42:09.510 06:32:28 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:42:09.510 06:32:28 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:42:09.510 06:32:28 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:42:09.510 06:32:28 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:42:09.510 06:32:28 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:42:09.510 06:32:28 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:42:09.510 06:32:28 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:42:09.510 06:32:28 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:42:09.510 06:32:28 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:42:09.510 06:32:28 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:42:09.510 06:32:28 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:42:09.510 06:32:28 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:42:09.510 06:32:28 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:42:09.510 06:32:28 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:42:09.510 06:32:28 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:42:13.698 06:32:32 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ7244049A1P0FGN 00:42:13.698 06:32:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:42:13.698 06:32:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:42:13.698 06:32:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:42:16.981 06:32:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:42:16.981 06:32:37 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:42:16.981 06:32:37 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:16.982 06:32:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:16.982 06:32:37 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:42:16.982 06:32:37 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:16.982 06:32:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:16.982 06:32:37 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1294999 00:42:16.982 06:32:37 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:42:16.982 06:32:37 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:16.982 06:32:37 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1294999 00:42:16.982 06:32:37 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1294999 ']' 00:42:16.982 06:32:37 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:42:16.982 06:32:37 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:16.982 06:32:37 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:16.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:16.982 06:32:37 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:16.982 06:32:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:17.239 [2024-12-15 06:32:37.155507] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:42:17.239 [2024-12-15 06:32:37.155553] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:17.239 [2024-12-15 06:32:37.231403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:17.239 [2024-12-15 06:32:37.254978] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:17.240 [2024-12-15 06:32:37.255024] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:17.240 [2024-12-15 06:32:37.255031] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:17.240 [2024-12-15 06:32:37.255037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:17.240 [2024-12-15 06:32:37.255042] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:42:17.240 [2024-12-15 06:32:37.256674] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:42:17.240 [2024-12-15 06:32:37.256781] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:42:17.240 [2024-12-15 06:32:37.256891] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:42:17.240 [2024-12-15 06:32:37.256892] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:42:17.240 06:32:37 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:17.240 06:32:37 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:42:17.240 06:32:37 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:42:17.240 06:32:37 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.240 06:32:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:17.240 INFO: Log level set to 20 00:42:17.240 INFO: Requests: 00:42:17.240 { 00:42:17.240 "jsonrpc": "2.0", 00:42:17.240 "method": "nvmf_set_config", 00:42:17.240 "id": 1, 00:42:17.240 "params": { 00:42:17.240 "admin_cmd_passthru": { 00:42:17.240 "identify_ctrlr": true 00:42:17.240 } 00:42:17.240 } 00:42:17.240 } 00:42:17.240 00:42:17.240 INFO: response: 00:42:17.240 { 00:42:17.240 "jsonrpc": "2.0", 00:42:17.240 "id": 1, 00:42:17.240 "result": true 00:42:17.240 } 00:42:17.240 00:42:17.240 06:32:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.240 06:32:37 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:42:17.240 06:32:37 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.240 06:32:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:17.240 INFO: Setting log level to 20 00:42:17.240 INFO: Setting log level to 20 00:42:17.240 INFO: Log level set to 20 00:42:17.240 INFO: Log level set to 20 00:42:17.240 
INFO: Requests: 00:42:17.240 { 00:42:17.240 "jsonrpc": "2.0", 00:42:17.240 "method": "framework_start_init", 00:42:17.240 "id": 1 00:42:17.240 } 00:42:17.240 00:42:17.240 INFO: Requests: 00:42:17.240 { 00:42:17.240 "jsonrpc": "2.0", 00:42:17.240 "method": "framework_start_init", 00:42:17.240 "id": 1 00:42:17.240 } 00:42:17.240 00:42:17.240 [2024-12-15 06:32:37.375153] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:42:17.497 INFO: response: 00:42:17.497 { 00:42:17.497 "jsonrpc": "2.0", 00:42:17.497 "id": 1, 00:42:17.497 "result": true 00:42:17.497 } 00:42:17.497 00:42:17.497 INFO: response: 00:42:17.497 { 00:42:17.497 "jsonrpc": "2.0", 00:42:17.497 "id": 1, 00:42:17.497 "result": true 00:42:17.497 } 00:42:17.497 00:42:17.497 06:32:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.497 06:32:37 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:17.497 06:32:37 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.497 06:32:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:17.497 INFO: Setting log level to 40 00:42:17.497 INFO: Setting log level to 40 00:42:17.497 INFO: Setting log level to 40 00:42:17.497 [2024-12-15 06:32:37.388439] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:17.497 06:32:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.497 06:32:37 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:42:17.497 06:32:37 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:17.497 06:32:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:17.497 06:32:37 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:42:17.497 06:32:37 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.497 06:32:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:20.773 Nvme0n1 00:42:20.773 06:32:40 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:20.773 06:32:40 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:42:20.773 06:32:40 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:20.773 06:32:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:20.773 06:32:40 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:20.773 06:32:40 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:42:20.773 06:32:40 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:20.773 06:32:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:20.773 06:32:40 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:20.773 06:32:40 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:20.773 06:32:40 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:20.773 06:32:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:20.773 [2024-12-15 06:32:40.300309] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:20.773 06:32:40 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:20.773 06:32:40 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:42:20.773 06:32:40 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:20.773 06:32:40 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:20.773 [ 00:42:20.773 { 00:42:20.773 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:42:20.773 "subtype": "Discovery", 00:42:20.773 "listen_addresses": [], 00:42:20.773 "allow_any_host": true, 00:42:20.773 "hosts": [] 00:42:20.773 }, 00:42:20.773 { 00:42:20.773 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:42:20.773 "subtype": "NVMe", 00:42:20.773 "listen_addresses": [ 00:42:20.773 { 00:42:20.773 "trtype": "TCP", 00:42:20.773 "adrfam": "IPv4", 00:42:20.773 "traddr": "10.0.0.2", 00:42:20.773 "trsvcid": "4420" 00:42:20.773 } 00:42:20.773 ], 00:42:20.773 "allow_any_host": true, 00:42:20.773 "hosts": [], 00:42:20.773 "serial_number": "SPDK00000000000001", 00:42:20.773 "model_number": "SPDK bdev Controller", 00:42:20.773 "max_namespaces": 1, 00:42:20.773 "min_cntlid": 1, 00:42:20.773 "max_cntlid": 65519, 00:42:20.773 "namespaces": [ 00:42:20.773 { 00:42:20.773 "nsid": 1, 00:42:20.773 "bdev_name": "Nvme0n1", 00:42:20.773 "name": "Nvme0n1", 00:42:20.773 "nguid": "D57F1E3855B14837BA8474676C1BAE4C", 00:42:20.773 "uuid": "d57f1e38-55b1-4837-ba84-74676c1bae4c" 00:42:20.773 } 00:42:20.773 ] 00:42:20.773 } 00:42:20.773 ] 00:42:20.773 06:32:40 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:20.773 06:32:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:20.773 06:32:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:42:20.773 06:32:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:42:20.773 06:32:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ7244049A1P0FGN 00:42:20.773 06:32:40 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:20.773 06:32:40 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:42:20.773 06:32:40 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:42:20.773 06:32:40 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:42:20.774 06:32:40 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ7244049A1P0FGN '!=' BTLJ7244049A1P0FGN ']' 00:42:20.774 06:32:40 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:42:20.774 06:32:40 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:20.774 06:32:40 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:20.774 06:32:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:20.774 06:32:40 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:20.774 06:32:40 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:42:20.774 06:32:40 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:42:20.774 06:32:40 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:20.774 06:32:40 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:42:20.774 06:32:40 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:20.774 06:32:40 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:42:20.774 06:32:40 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:20.774 06:32:40 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:20.774 rmmod nvme_tcp 00:42:20.774 rmmod nvme_fabrics 00:42:20.774 rmmod nvme_keyring 00:42:20.774 06:32:40 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:20.774 06:32:40 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:42:20.774 06:32:40 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:42:20.774 06:32:40 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 1294999 ']' 00:42:20.774 06:32:40 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1294999 00:42:20.774 06:32:40 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1294999 ']' 00:42:20.774 06:32:40 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1294999 00:42:20.774 06:32:40 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:42:20.774 06:32:40 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:20.774 06:32:40 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1294999 00:42:20.774 06:32:40 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:20.774 06:32:40 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:20.774 06:32:40 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1294999' 00:42:20.774 killing process with pid 1294999 00:42:20.774 06:32:40 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1294999 00:42:20.774 06:32:40 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1294999 00:42:22.146 06:32:42 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:22.146 06:32:42 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:22.146 06:32:42 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:22.146 06:32:42 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:42:22.146 06:32:42 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:42:22.146 06:32:42 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:22.146 06:32:42 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:42:22.146 06:32:42 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:22.146 06:32:42 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:22.146 06:32:42 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:22.146 06:32:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:22.146 06:32:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:24.681 06:32:44 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:24.681 00:42:24.681 real 0m21.667s 00:42:24.681 user 0m27.458s 00:42:24.681 sys 0m5.276s 00:42:24.681 06:32:44 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:24.681 06:32:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:24.681 ************************************ 00:42:24.681 END TEST nvmf_identify_passthru 00:42:24.681 ************************************ 00:42:24.681 06:32:44 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:42:24.681 06:32:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:24.681 06:32:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:24.681 06:32:44 -- common/autotest_common.sh@10 -- # set +x 00:42:24.681 ************************************ 00:42:24.681 START TEST nvmf_dif 00:42:24.681 ************************************ 00:42:24.681 06:32:44 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:42:24.681 * Looking for test storage... 
00:42:24.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:24.681 06:32:44 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:24.681 06:32:44 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:42:24.681 06:32:44 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:24.681 06:32:44 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:42:24.681 06:32:44 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:24.681 06:32:44 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:24.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:24.681 --rc genhtml_branch_coverage=1 00:42:24.681 --rc genhtml_function_coverage=1 00:42:24.681 --rc genhtml_legend=1 00:42:24.681 --rc geninfo_all_blocks=1 00:42:24.681 --rc geninfo_unexecuted_blocks=1 00:42:24.681 00:42:24.681 ' 00:42:24.681 06:32:44 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:24.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:24.681 --rc genhtml_branch_coverage=1 00:42:24.681 --rc genhtml_function_coverage=1 00:42:24.681 --rc genhtml_legend=1 00:42:24.681 --rc geninfo_all_blocks=1 00:42:24.681 --rc geninfo_unexecuted_blocks=1 00:42:24.681 00:42:24.681 ' 00:42:24.681 06:32:44 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:42:24.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:24.681 --rc genhtml_branch_coverage=1 00:42:24.681 --rc genhtml_function_coverage=1 00:42:24.681 --rc genhtml_legend=1 00:42:24.681 --rc geninfo_all_blocks=1 00:42:24.681 --rc geninfo_unexecuted_blocks=1 00:42:24.681 00:42:24.681 ' 00:42:24.681 06:32:44 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:24.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:24.681 --rc genhtml_branch_coverage=1 00:42:24.681 --rc genhtml_function_coverage=1 00:42:24.681 --rc genhtml_legend=1 00:42:24.681 --rc geninfo_all_blocks=1 00:42:24.681 --rc geninfo_unexecuted_blocks=1 00:42:24.681 00:42:24.681 ' 00:42:24.681 06:32:44 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:24.681 06:32:44 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:42:24.681 06:32:44 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:24.681 06:32:44 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:24.681 06:32:44 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:24.681 06:32:44 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:24.681 06:32:44 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:24.681 06:32:44 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:24.681 06:32:44 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:24.681 06:32:44 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:24.681 06:32:44 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:24.681 06:32:44 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:24.681 06:32:44 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:42:24.681 06:32:44 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:42:24.681 06:32:44 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:24.681 06:32:44 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:24.681 06:32:44 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:24.681 06:32:44 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:24.681 06:32:44 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:24.681 06:32:44 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:24.681 06:32:44 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:24.681 06:32:44 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:24.681 06:32:44 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:24.681 06:32:44 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:42:24.681 06:32:44 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:24.682 06:32:44 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:42:24.682 06:32:44 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:24.682 06:32:44 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:24.682 06:32:44 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:24.682 06:32:44 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:24.682 06:32:44 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:24.682 06:32:44 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:24.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:24.682 06:32:44 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:24.682 06:32:44 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:24.682 06:32:44 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:24.682 06:32:44 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:42:24.682 06:32:44 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:42:24.682 06:32:44 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:42:24.682 06:32:44 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:42:24.682 06:32:44 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:42:24.682 06:32:44 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:24.682 06:32:44 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:24.682 06:32:44 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:24.682 06:32:44 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:24.682 06:32:44 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:24.682 06:32:44 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:24.682 06:32:44 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:24.682 06:32:44 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:24.682 06:32:44 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:24.682 06:32:44 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:24.682 06:32:44 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:42:24.682 06:32:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:42:31.252 06:32:50 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:31.252 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:31.252 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:31.252 06:32:50 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:31.252 Found net devices under 0000:af:00.0: cvl_0_0 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:31.252 06:32:50 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:31.253 Found net devices under 0000:af:00.1: cvl_0_1 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:31.253 
06:32:50 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:31.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:42:31.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:42:31.253 00:42:31.253 --- 10.0.0.2 ping statistics --- 00:42:31.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:31.253 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:31.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:31.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:42:31.253 00:42:31.253 --- 10.0.0.1 ping statistics --- 00:42:31.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:31.253 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:42:31.253 06:32:50 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:33.158 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:42:33.158 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:42:33.158 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:42:33.158 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:42:33.158 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:42:33.158 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:42:33.158 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:42:33.158 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:42:33.158 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:42:33.158 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:42:33.158 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:42:33.158 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:42:33.158 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:42:33.158 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:42:33.158 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:42:33.158 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:42:33.158 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:42:33.158 06:32:53 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:33.158 06:32:53 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:33.158 06:32:53 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:33.158 06:32:53 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:33.158 06:32:53 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:33.158 06:32:53 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:33.417 06:32:53 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:42:33.417 06:32:53 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:42:33.417 06:32:53 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:33.417 06:32:53 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:33.417 06:32:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:33.417 06:32:53 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1300398 00:42:33.417 06:32:53 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1300398 00:42:33.417 06:32:53 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:42:33.417 06:32:53 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1300398 ']' 00:42:33.417 06:32:53 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:33.417 06:32:53 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:33.417 06:32:53 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:42:33.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:33.417 06:32:53 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:33.417 06:32:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:33.417 [2024-12-15 06:32:53.387840] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:42:33.417 [2024-12-15 06:32:53.387886] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:33.417 [2024-12-15 06:32:53.467280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:33.417 [2024-12-15 06:32:53.488348] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:33.417 [2024-12-15 06:32:53.488386] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:33.417 [2024-12-15 06:32:53.488394] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:33.417 [2024-12-15 06:32:53.488401] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:33.417 [2024-12-15 06:32:53.488406] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
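For readability, the interface/namespace plumbing that nvmf/common.sh performs earlier in this trace (the `ip netns` / `ip link` / `iptables` calls around @271-@290) boils down to the sequence sketched below. This is a sketch only: the `run` dry-run wrapper is added here (it is not part of the harness) so the snippet is safe to execute without root or the `cvl_0_0`/`cvl_0_1` net devices present; drop `run` to apply the commands for real.

```shell
#!/bin/sh
# Dry-run sketch of the netns setup from nvmf/common.sh (@271-@290).
# "run" just echoes each command; remove it to execute for real
# (requires root and the cvl_0_0 / cvl_0_1 devices from the e810 pair).
run() { echo "$@"; }

run ip netns add cvl_0_0_ns_spdk                  # target-side namespace
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move target NIC into it
run ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, host ns
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP traffic to port 4420 on the initiator interface
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

With the wrapper removed, the two `ping` checks in the trace (10.0.0.2 from the host, 10.0.0.1 from inside the namespace) verify the plumbing before the target starts.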
00:42:33.417 [2024-12-15 06:32:53.488894] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:42:33.675 06:32:53 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:33.675 06:32:53 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:42:33.675 06:32:53 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:33.675 06:32:53 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:33.675 06:32:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:33.675 06:32:53 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:33.675 06:32:53 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:42:33.675 06:32:53 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:42:33.675 06:32:53 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.675 06:32:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:33.675 [2024-12-15 06:32:53.627752] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:33.675 06:32:53 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.675 06:32:53 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:42:33.675 06:32:53 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:33.675 06:32:53 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:33.675 06:32:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:33.675 ************************************ 00:42:33.675 START TEST fio_dif_1_default 00:42:33.675 ************************************ 00:42:33.675 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:42:33.675 06:32:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:42:33.675 06:32:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:42:33.675 06:32:53 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:42:33.675 06:32:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:42:33.675 06:32:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:42:33.675 06:32:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:33.675 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.675 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:33.675 bdev_null0 00:42:33.675 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.675 06:32:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:33.675 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.675 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:33.675 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.675 06:32:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:33.675 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.675 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:33.675 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:33.676 [2024-12-15 06:32:53.708126] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:33.676 { 00:42:33.676 "params": { 00:42:33.676 "name": "Nvme$subsystem", 00:42:33.676 "trtype": "$TEST_TRANSPORT", 00:42:33.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:33.676 "adrfam": "ipv4", 00:42:33.676 "trsvcid": "$NVMF_PORT", 00:42:33.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:33.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:33.676 "hdgst": ${hdgst:-false}, 00:42:33.676 "ddgst": ${ddgst:-false} 00:42:33.676 }, 00:42:33.676 "method": "bdev_nvme_attach_controller" 00:42:33.676 } 00:42:33.676 EOF 00:42:33.676 )") 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:33.676 "params": { 00:42:33.676 "name": "Nvme0", 00:42:33.676 "trtype": "tcp", 00:42:33.676 "traddr": "10.0.0.2", 00:42:33.676 "adrfam": "ipv4", 00:42:33.676 "trsvcid": "4420", 00:42:33.676 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:33.676 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:33.676 "hdgst": false, 00:42:33.676 "ddgst": false 00:42:33.676 }, 00:42:33.676 "method": "bdev_nvme_attach_controller" 00:42:33.676 }' 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:33.676 06:32:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:34.240 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:34.240 fio-3.35 
00:42:34.240 Starting 1 thread 00:42:46.445 00:42:46.445 filename0: (groupid=0, jobs=1): err= 0: pid=1300734: Sun Dec 15 06:33:04 2024 00:42:46.445 read: IOPS=143, BW=576KiB/s (589kB/s)(5776KiB/10034msec) 00:42:46.445 slat (nsec): min=5883, max=26188, avg=6176.38, stdev=731.16 00:42:46.445 clat (usec): min=365, max=43722, avg=27777.31, stdev=19058.23 00:42:46.445 lat (usec): min=371, max=43748, avg=27783.49, stdev=19058.22 00:42:46.445 clat percentiles (usec): 00:42:46.445 | 1.00th=[ 383], 5.00th=[ 396], 10.00th=[ 400], 20.00th=[ 412], 00:42:46.445 | 30.00th=[ 594], 40.00th=[40633], 50.00th=[40633], 60.00th=[41157], 00:42:46.445 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:42:46.445 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:42:46.445 | 99.99th=[43779] 00:42:46.445 bw ( KiB/s): min= 384, max= 832, per=99.89%, avg=576.00, stdev=184.56, samples=20 00:42:46.445 iops : min= 96, max= 208, avg=144.00, stdev=46.14, samples=20 00:42:46.445 lat (usec) : 500=26.32%, 750=6.37% 00:42:46.445 lat (msec) : 50=67.31% 00:42:46.445 cpu : usr=92.67%, sys=7.07%, ctx=13, majf=0, minf=0 00:42:46.445 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:46.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:46.445 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:46.445 issued rwts: total=1444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:46.445 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:46.445 00:42:46.445 Run status group 0 (all jobs): 00:42:46.445 READ: bw=576KiB/s (589kB/s), 576KiB/s-576KiB/s (589kB/s-589kB/s), io=5776KiB (5915kB), run=10034-10034msec 00:42:46.445 06:33:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:42:46.445 06:33:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:42:46.445 06:33:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
00:42:46.445 06:33:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:46.445 06:33:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:42:46.445 06:33:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:46.445 06:33:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.445 06:33:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:46.445 06:33:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.445 06:33:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:46.445 06:33:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.445 06:33:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:46.445 06:33:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.445 00:42:46.445 real 0m11.052s 00:42:46.445 user 0m15.706s 00:42:46.445 sys 0m0.997s 00:42:46.445 06:33:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:46.445 06:33:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:46.445 ************************************ 00:42:46.445 END TEST fio_dif_1_default 00:42:46.445 ************************************ 00:42:46.445 06:33:04 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:42:46.445 06:33:04 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:46.445 06:33:04 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:46.445 06:33:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:46.445 ************************************ 00:42:46.445 START TEST fio_dif_1_multi_subsystems 00:42:46.445 ************************************ 00:42:46.445 06:33:04 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:42:46.445 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:42:46.445 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:42:46.445 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:42:46.445 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:46.445 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:42:46.445 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:42:46.445 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:46.445 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.445 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:46.445 bdev_null0 00:42:46.445 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.446 06:33:04 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:46.446 [2024-12-15 06:33:04.831206] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:46.446 bdev_null1 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:46.446 { 00:42:46.446 "params": { 00:42:46.446 "name": "Nvme$subsystem", 00:42:46.446 "trtype": "$TEST_TRANSPORT", 00:42:46.446 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:46.446 "adrfam": "ipv4", 00:42:46.446 "trsvcid": "$NVMF_PORT", 00:42:46.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:46.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:46.446 "hdgst": ${hdgst:-false}, 00:42:46.446 "ddgst": ${ddgst:-false} 00:42:46.446 }, 00:42:46.446 "method": "bdev_nvme_attach_controller" 00:42:46.446 } 00:42:46.446 EOF 00:42:46.446 )") 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 
00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:46.446 { 00:42:46.446 "params": { 00:42:46.446 "name": "Nvme$subsystem", 00:42:46.446 "trtype": "$TEST_TRANSPORT", 00:42:46.446 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:46.446 "adrfam": "ipv4", 00:42:46.446 "trsvcid": "$NVMF_PORT", 00:42:46.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:46.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:46.446 "hdgst": ${hdgst:-false}, 00:42:46.446 "ddgst": ${ddgst:-false} 00:42:46.446 }, 00:42:46.446 "method": "bdev_nvme_attach_controller" 00:42:46.446 } 00:42:46.446 EOF 00:42:46.446 )") 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:46.446 "params": { 00:42:46.446 "name": "Nvme0", 00:42:46.446 "trtype": "tcp", 00:42:46.446 "traddr": "10.0.0.2", 00:42:46.446 "adrfam": "ipv4", 00:42:46.446 "trsvcid": "4420", 00:42:46.446 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:46.446 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:46.446 "hdgst": false, 00:42:46.446 "ddgst": false 00:42:46.446 }, 00:42:46.446 "method": "bdev_nvme_attach_controller" 00:42:46.446 },{ 00:42:46.446 "params": { 00:42:46.446 "name": "Nvme1", 00:42:46.446 "trtype": "tcp", 00:42:46.446 "traddr": "10.0.0.2", 00:42:46.446 "adrfam": "ipv4", 00:42:46.446 "trsvcid": "4420", 00:42:46.446 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:46.446 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:46.446 "hdgst": false, 00:42:46.446 "ddgst": false 00:42:46.446 }, 00:42:46.446 "method": "bdev_nvme_attach_controller" 00:42:46.446 }' 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:46.446 06:33:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:46.446 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:46.446 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:46.446 fio-3.35 00:42:46.446 Starting 2 threads 00:42:56.414 00:42:56.414 filename0: (groupid=0, jobs=1): err= 0: pid=1302823: Sun Dec 15 06:33:16 2024 00:42:56.414 read: IOPS=98, BW=393KiB/s (402kB/s)(3936KiB/10017msec) 00:42:56.414 slat (nsec): min=6022, max=54716, avg=8010.52, stdev=3171.24 00:42:56.414 clat (usec): min=419, max=42529, avg=40692.05, stdev=3634.35 00:42:56.414 lat (usec): min=426, max=42535, avg=40700.06, stdev=3634.19 00:42:56.414 clat percentiles (usec): 00:42:56.414 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:42:56.414 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:56.414 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:56.414 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:42:56.414 | 99.99th=[42730] 00:42:56.414 bw ( KiB/s): min= 384, max= 416, per=50.09%, avg=392.00, stdev=14.22, samples=20 00:42:56.414 iops : min= 96, max= 104, avg=98.00, stdev= 3.55, samples=20 00:42:56.414 lat (usec) : 500=0.41%, 1000=0.41% 00:42:56.414 lat (msec) : 50=99.19% 00:42:56.414 cpu : usr=96.71%, sys=3.04%, ctx=13, majf=0, minf=85 00:42:56.414 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:56.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:42:56.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:56.414 issued rwts: total=984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:56.414 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:56.414 filename1: (groupid=0, jobs=1): err= 0: pid=1302825: Sun Dec 15 06:33:16 2024 00:42:56.414 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10014msec) 00:42:56.414 slat (nsec): min=6011, max=42709, avg=7955.26, stdev=2782.93 00:42:56.414 clat (usec): min=40827, max=42071, avg=41014.20, stdev=191.82 00:42:56.414 lat (usec): min=40833, max=42114, avg=41022.15, stdev=192.18 00:42:56.414 clat percentiles (usec): 00:42:56.414 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:42:56.414 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:56.414 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:56.414 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:42:56.414 | 99.99th=[42206] 00:42:56.414 bw ( KiB/s): min= 384, max= 416, per=49.57%, avg=388.80, stdev=11.72, samples=20 00:42:56.414 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:42:56.414 lat (msec) : 50=100.00% 00:42:56.414 cpu : usr=96.34%, sys=3.40%, ctx=18, majf=0, minf=104 00:42:56.414 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:56.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:56.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:56.414 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:56.414 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:56.414 00:42:56.414 Run status group 0 (all jobs): 00:42:56.414 READ: bw=783KiB/s (801kB/s), 390KiB/s-393KiB/s (399kB/s-402kB/s), io=7840KiB (8028kB), run=10014-10017msec 00:42:56.414 06:33:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:42:56.414 06:33:16 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:42:56.414 06:33:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:56.414 06:33:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:56.414 06:33:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:42:56.414 06:33:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:56.414 06:33:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:56.414 06:33:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:56.414 06:33:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:56.414 06:33:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:56.414 06:33:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:56.414 06:33:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:56.414 06:33:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:56.414 06:33:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:56.414 06:33:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:56.414 06:33:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:42:56.414 06:33:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:56.414 06:33:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:56.414 06:33:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:56.414 06:33:16 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:56.414 06:33:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:56.414 06:33:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:56.414 06:33:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:56.414 06:33:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:56.414 00:42:56.414 real 0m11.424s 00:42:56.414 user 0m26.503s 00:42:56.414 sys 0m1.058s 00:42:56.414 06:33:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:56.414 06:33:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:56.414 ************************************ 00:42:56.414 END TEST fio_dif_1_multi_subsystems 00:42:56.414 ************************************ 00:42:56.414 06:33:16 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:42:56.414 06:33:16 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:56.414 06:33:16 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:56.414 06:33:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:56.414 ************************************ 00:42:56.414 START TEST fio_dif_rand_params 00:42:56.414 ************************************ 00:42:56.414 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:42:56.414 06:33:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:42:56.414 06:33:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:42:56.414 06:33:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:42:56.414 06:33:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:42:56.414 06:33:16 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@103 -- # numjobs=3 00:42:56.414 06:33:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:42:56.414 06:33:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:42:56.414 06:33:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:42:56.414 06:33:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:56.414 06:33:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:56.414 06:33:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:56.414 06:33:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:56.414 06:33:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:42:56.414 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:56.414 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:56.414 bdev_null0 00:42:56.414 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:56.414 06:33:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:56.414 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:56.414 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:56.414 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:56.414 06:33:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:56.414 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:56.414 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:56.414 06:33:16 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:56.414 06:33:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:56.415 [2024-12-15 06:33:16.327682] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:56.415 { 00:42:56.415 "params": { 00:42:56.415 "name": "Nvme$subsystem", 00:42:56.415 "trtype": "$TEST_TRANSPORT", 00:42:56.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:56.415 "adrfam": "ipv4", 00:42:56.415 "trsvcid": "$NVMF_PORT", 00:42:56.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:56.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:56.415 "hdgst": ${hdgst:-false}, 00:42:56.415 "ddgst": ${ddgst:-false} 00:42:56.415 }, 00:42:56.415 "method": "bdev_nvme_attach_controller" 00:42:56.415 } 00:42:56.415 EOF 00:42:56.415 )") 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:42:56.415 06:33:16 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:56.415 "params": { 00:42:56.415 "name": "Nvme0", 00:42:56.415 "trtype": "tcp", 00:42:56.415 "traddr": "10.0.0.2", 00:42:56.415 "adrfam": "ipv4", 00:42:56.415 "trsvcid": "4420", 00:42:56.415 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:56.415 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:56.415 "hdgst": false, 00:42:56.415 "ddgst": false 00:42:56.415 }, 00:42:56.415 "method": "bdev_nvme_attach_controller" 00:42:56.415 }' 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:56.415 06:33:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:56.674 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:42:56.674 ... 00:42:56.674 fio-3.35 00:42:56.674 Starting 3 threads 00:43:03.239 00:43:03.239 filename0: (groupid=0, jobs=1): err= 0: pid=1305072: Sun Dec 15 06:33:22 2024 00:43:03.239 read: IOPS=314, BW=39.3MiB/s (41.2MB/s)(198MiB/5044msec) 00:43:03.239 slat (nsec): min=6264, max=72052, avg=14457.74, stdev=7188.25 00:43:03.239 clat (usec): min=3581, max=89457, avg=9498.15, stdev=5595.37 00:43:03.239 lat (usec): min=3587, max=89473, avg=9512.61, stdev=5595.43 00:43:03.239 clat percentiles (usec): 00:43:03.239 | 1.00th=[ 3851], 5.00th=[ 6325], 10.00th=[ 7308], 20.00th=[ 8029], 00:43:03.239 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9372], 00:43:03.239 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10552], 95.00th=[11076], 00:43:03.240 | 99.00th=[49021], 99.50th=[50070], 99.90th=[88605], 99.95th=[89654], 00:43:03.240 | 99.99th=[89654] 00:43:03.240 bw ( KiB/s): min=30720, max=45824, per=34.17%, avg=40524.80, stdev=4789.40, samples=10 00:43:03.240 iops : min= 240, max= 358, avg=316.60, stdev=37.42, samples=10 00:43:03.240 lat (msec) : 4=1.13%, 10=78.50%, 20=19.10%, 50=0.76%, 100=0.50% 00:43:03.240 cpu : usr=94.45%, sys=5.25%, ctx=13, majf=0, minf=78 00:43:03.240 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:03.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.240 issued rwts: total=1586,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.240 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:03.240 filename0: (groupid=0, jobs=1): err= 0: pid=1305073: Sun Dec 15 06:33:22 2024 00:43:03.240 read: IOPS=307, BW=38.4MiB/s (40.3MB/s)(194MiB/5043msec) 00:43:03.240 slat (usec): min=6, max=105, avg=12.44, stdev= 4.65 00:43:03.240 
clat (usec): min=3551, max=53020, avg=9718.43, stdev=3319.50 00:43:03.240 lat (usec): min=3557, max=53049, avg=9730.87, stdev=3320.11 00:43:03.240 clat percentiles (usec): 00:43:03.240 | 1.00th=[ 4113], 5.00th=[ 6390], 10.00th=[ 6980], 20.00th=[ 8455], 00:43:03.240 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[10028], 00:43:03.240 | 70.00th=[10421], 80.00th=[10814], 90.00th=[11338], 95.00th=[11863], 00:43:03.240 | 99.00th=[13042], 99.50th=[44827], 99.90th=[51119], 99.95th=[53216], 00:43:03.240 | 99.99th=[53216] 00:43:03.240 bw ( KiB/s): min=35072, max=43776, per=33.42%, avg=39628.80, stdev=2837.38, samples=10 00:43:03.240 iops : min= 274, max= 342, avg=309.60, stdev=22.17, samples=10 00:43:03.240 lat (msec) : 4=0.84%, 10=57.94%, 20=40.71%, 50=0.26%, 100=0.26% 00:43:03.240 cpu : usr=95.50%, sys=4.18%, ctx=7, majf=0, minf=47 00:43:03.240 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:03.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.240 issued rwts: total=1550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.240 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:03.240 filename0: (groupid=0, jobs=1): err= 0: pid=1305074: Sun Dec 15 06:33:22 2024 00:43:03.240 read: IOPS=307, BW=38.4MiB/s (40.3MB/s)(192MiB/5002msec) 00:43:03.240 slat (nsec): min=6350, max=38271, avg=12527.10, stdev=4457.81 00:43:03.240 clat (usec): min=3497, max=50903, avg=9747.78, stdev=4527.27 00:43:03.240 lat (usec): min=3507, max=50929, avg=9760.30, stdev=4527.41 00:43:03.240 clat percentiles (usec): 00:43:03.240 | 1.00th=[ 5669], 5.00th=[ 6587], 10.00th=[ 7635], 20.00th=[ 8291], 00:43:03.240 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9634], 00:43:03.240 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[11076], 95.00th=[11600], 00:43:03.240 | 99.00th=[47973], 99.50th=[49546], 99.90th=[51119], 
99.95th=[51119], 00:43:03.240 | 99.99th=[51119] 00:43:03.240 bw ( KiB/s): min=34048, max=45568, per=33.29%, avg=39480.89, stdev=3938.53, samples=9 00:43:03.240 iops : min= 266, max= 356, avg=308.44, stdev=30.77, samples=9 00:43:03.240 lat (msec) : 4=0.07%, 10=70.85%, 20=27.91%, 50=0.72%, 100=0.46% 00:43:03.240 cpu : usr=94.78%, sys=4.90%, ctx=7, majf=0, minf=35 00:43:03.240 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:03.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.240 issued rwts: total=1537,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.240 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:03.240 00:43:03.240 Run status group 0 (all jobs): 00:43:03.240 READ: bw=116MiB/s (121MB/s), 38.4MiB/s-39.3MiB/s (40.3MB/s-41.2MB/s), io=584MiB (612MB), run=5002-5044msec 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:03.240 06:33:22 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.240 bdev_null0 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.240 [2024-12-15 06:33:22.533325] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.240 bdev_null1 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:43:03.240 bdev_null2 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.240 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:03.241 { 00:43:03.241 "params": { 00:43:03.241 "name": "Nvme$subsystem", 00:43:03.241 "trtype": "$TEST_TRANSPORT", 00:43:03.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:03.241 "adrfam": "ipv4", 00:43:03.241 "trsvcid": "$NVMF_PORT", 00:43:03.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:03.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:03.241 "hdgst": ${hdgst:-false}, 00:43:03.241 "ddgst": ${ddgst:-false} 00:43:03.241 }, 00:43:03.241 "method": "bdev_nvme_attach_controller" 00:43:03.241 } 00:43:03.241 EOF 00:43:03.241 )") 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:03.241 06:33:22 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:03.241 { 00:43:03.241 "params": { 00:43:03.241 "name": "Nvme$subsystem", 00:43:03.241 "trtype": "$TEST_TRANSPORT", 00:43:03.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:03.241 "adrfam": "ipv4", 00:43:03.241 "trsvcid": "$NVMF_PORT", 00:43:03.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:03.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:03.241 "hdgst": ${hdgst:-false}, 00:43:03.241 "ddgst": ${ddgst:-false} 00:43:03.241 }, 00:43:03.241 "method": "bdev_nvme_attach_controller" 00:43:03.241 } 00:43:03.241 EOF 00:43:03.241 )") 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:03.241 06:33:22 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:03.241 { 00:43:03.241 "params": { 00:43:03.241 "name": "Nvme$subsystem", 00:43:03.241 "trtype": "$TEST_TRANSPORT", 00:43:03.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:03.241 "adrfam": "ipv4", 00:43:03.241 "trsvcid": "$NVMF_PORT", 00:43:03.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:03.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:03.241 "hdgst": ${hdgst:-false}, 00:43:03.241 "ddgst": ${ddgst:-false} 00:43:03.241 }, 00:43:03.241 "method": "bdev_nvme_attach_controller" 00:43:03.241 } 00:43:03.241 EOF 00:43:03.241 )") 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:03.241 "params": { 00:43:03.241 "name": "Nvme0", 00:43:03.241 "trtype": "tcp", 00:43:03.241 "traddr": "10.0.0.2", 00:43:03.241 "adrfam": "ipv4", 00:43:03.241 "trsvcid": "4420", 00:43:03.241 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:03.241 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:03.241 "hdgst": false, 00:43:03.241 "ddgst": false 00:43:03.241 }, 00:43:03.241 "method": "bdev_nvme_attach_controller" 00:43:03.241 },{ 00:43:03.241 "params": { 00:43:03.241 "name": "Nvme1", 00:43:03.241 "trtype": "tcp", 00:43:03.241 "traddr": "10.0.0.2", 00:43:03.241 "adrfam": "ipv4", 00:43:03.241 "trsvcid": "4420", 00:43:03.241 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:03.241 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:03.241 "hdgst": false, 00:43:03.241 "ddgst": false 00:43:03.241 }, 00:43:03.241 "method": "bdev_nvme_attach_controller" 00:43:03.241 },{ 00:43:03.241 "params": { 00:43:03.241 "name": "Nvme2", 00:43:03.241 "trtype": "tcp", 00:43:03.241 "traddr": "10.0.0.2", 00:43:03.241 "adrfam": "ipv4", 00:43:03.241 "trsvcid": "4420", 00:43:03.241 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:43:03.241 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:43:03.241 "hdgst": false, 00:43:03.241 "ddgst": false 00:43:03.241 }, 00:43:03.241 "method": "bdev_nvme_attach_controller" 00:43:03.241 }' 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:03.241 06:33:22 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:03.241 06:33:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:03.241 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:03.241 ... 00:43:03.241 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:03.241 ... 00:43:03.241 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:03.241 ... 
00:43:03.241 fio-3.35 00:43:03.241 Starting 24 threads 00:43:15.437 00:43:15.437 filename0: (groupid=0, jobs=1): err= 0: pid=1306185: Sun Dec 15 06:33:34 2024 00:43:15.437 read: IOPS=527, BW=2108KiB/s (2159kB/s)(20.6MiB/10017msec) 00:43:15.437 slat (usec): min=7, max=120, avg=31.83, stdev=22.92 00:43:15.437 clat (usec): min=8839, max=31375, avg=30113.02, stdev=1771.58 00:43:15.437 lat (usec): min=8848, max=31388, avg=30144.85, stdev=1770.75 00:43:15.437 clat percentiles (usec): 00:43:15.437 | 1.00th=[16319], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:43:15.437 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:43:15.437 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:43:15.437 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31327], 99.95th=[31327], 00:43:15.437 | 99.99th=[31327] 00:43:15.437 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2105.60, stdev=77.42, samples=20 00:43:15.437 iops : min= 512, max= 576, avg=526.40, stdev=19.35, samples=20 00:43:15.437 lat (msec) : 10=0.04%, 20=1.14%, 50=98.83% 00:43:15.437 cpu : usr=98.55%, sys=1.05%, ctx=15, majf=0, minf=9 00:43:15.437 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:15.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.437 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.437 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.437 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.437 filename0: (groupid=0, jobs=1): err= 0: pid=1306186: Sun Dec 15 06:33:34 2024 00:43:15.437 read: IOPS=522, BW=2089KiB/s (2139kB/s)(20.5MiB/10050msec) 00:43:15.437 slat (usec): min=5, max=129, avg=50.57, stdev=24.65 00:43:15.437 clat (usec): min=11604, max=52402, avg=30035.42, stdev=1600.91 00:43:15.437 lat (usec): min=11627, max=52418, avg=30085.99, stdev=1602.78 00:43:15.437 clat percentiles (usec): 00:43:15.437 | 1.00th=[27919], 
5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:43:15.437 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:43:15.437 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:43:15.437 | 99.00th=[31065], 99.50th=[33162], 99.90th=[47449], 99.95th=[47449], 00:43:15.437 | 99.99th=[52167] 00:43:15.437 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2092.80, stdev=73.89, samples=20 00:43:15.437 iops : min= 480, max= 544, avg=523.20, stdev=18.47, samples=20 00:43:15.437 lat (msec) : 20=0.57%, 50=99.39%, 100=0.04% 00:43:15.437 cpu : usr=98.59%, sys=1.02%, ctx=14, majf=0, minf=9 00:43:15.437 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.4%, 16=6.5%, 32=0.0%, >=64=0.0% 00:43:15.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.437 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.437 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.437 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.437 filename0: (groupid=0, jobs=1): err= 0: pid=1306187: Sun Dec 15 06:33:34 2024 00:43:15.437 read: IOPS=524, BW=2097KiB/s (2148kB/s)(20.5MiB/10009msec) 00:43:15.437 slat (nsec): min=7799, max=80003, avg=17394.33, stdev=10080.65 00:43:15.437 clat (usec): min=12255, max=53168, avg=30364.20, stdev=1740.79 00:43:15.437 lat (usec): min=12264, max=53184, avg=30381.59, stdev=1739.99 00:43:15.437 clat percentiles (usec): 00:43:15.437 | 1.00th=[28967], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:43:15.437 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:43:15.437 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:43:15.437 | 99.00th=[31327], 99.50th=[31589], 99.90th=[53216], 99.95th=[53216], 00:43:15.437 | 99.99th=[53216] 00:43:15.437 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2092.95, stdev=74.79, samples=20 00:43:15.438 iops : min= 480, max= 544, avg=523.20, stdev=18.79, samples=20 
00:43:15.438 lat (msec) : 20=0.61%, 50=99.09%, 100=0.30% 00:43:15.438 cpu : usr=98.62%, sys=0.98%, ctx=16, majf=0, minf=9 00:43:15.438 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:15.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.438 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.438 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.438 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.438 filename0: (groupid=0, jobs=1): err= 0: pid=1306188: Sun Dec 15 06:33:34 2024 00:43:15.438 read: IOPS=522, BW=2091KiB/s (2141kB/s)(20.4MiB/10001msec) 00:43:15.438 slat (usec): min=4, max=119, avg=45.05, stdev=26.65 00:43:15.438 clat (usec): min=19371, max=52384, avg=30173.02, stdev=1550.52 00:43:15.438 lat (usec): min=19390, max=52454, avg=30218.07, stdev=1551.79 00:43:15.438 clat percentiles (usec): 00:43:15.438 | 1.00th=[28705], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:43:15.438 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:43:15.438 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:43:15.438 | 99.00th=[33424], 99.50th=[38011], 99.90th=[51643], 99.95th=[51643], 00:43:15.438 | 99.99th=[52167] 00:43:15.438 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2086.74, stdev=71.02, samples=19 00:43:15.438 iops : min= 480, max= 544, avg=521.68, stdev=17.75, samples=19 00:43:15.438 lat (msec) : 20=0.42%, 50=99.39%, 100=0.19% 00:43:15.438 cpu : usr=98.31%, sys=1.27%, ctx=15, majf=0, minf=9 00:43:15.438 IO depths : 1=5.5%, 2=11.2%, 4=22.9%, 8=53.0%, 16=7.5%, 32=0.0%, >=64=0.0% 00:43:15.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.438 complete : 0=0.0%, 4=93.7%, 8=0.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.438 issued rwts: total=5228,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.438 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:43:15.438 filename0: (groupid=0, jobs=1): err= 0: pid=1306189: Sun Dec 15 06:33:34 2024 00:43:15.438 read: IOPS=568, BW=2273KiB/s (2328kB/s)(22.2MiB/10010msec) 00:43:15.438 slat (nsec): min=7534, max=96355, avg=14595.51, stdev=6368.81 00:43:15.438 clat (usec): min=1295, max=50346, avg=28036.80, stdev=6173.01 00:43:15.438 lat (usec): min=1309, max=50380, avg=28051.40, stdev=6174.59 00:43:15.438 clat percentiles (usec): 00:43:15.438 | 1.00th=[ 1516], 5.00th=[13698], 10.00th=[19792], 20.00th=[29754], 00:43:15.438 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:43:15.438 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:43:15.438 | 99.00th=[33817], 99.50th=[34341], 99.90th=[41157], 99.95th=[41157], 00:43:15.438 | 99.99th=[50594] 00:43:15.438 bw ( KiB/s): min= 2048, max= 3449, per=4.50%, avg=2268.85, stdev=337.57, samples=20 00:43:15.438 iops : min= 512, max= 862, avg=567.20, stdev=84.35, samples=20 00:43:15.438 lat (msec) : 2=2.25%, 10=1.69%, 20=8.02%, 50=88.01%, 100=0.04% 00:43:15.438 cpu : usr=98.39%, sys=1.21%, ctx=16, majf=0, minf=9 00:43:15.438 IO depths : 1=4.6%, 2=9.7%, 4=21.0%, 8=56.7%, 16=7.9%, 32=0.0%, >=64=0.0% 00:43:15.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.438 complete : 0=0.0%, 4=93.1%, 8=1.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.438 issued rwts: total=5689,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.438 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.438 filename0: (groupid=0, jobs=1): err= 0: pid=1306190: Sun Dec 15 06:33:34 2024 00:43:15.438 read: IOPS=527, BW=2110KiB/s (2161kB/s)(20.6MiB/10009msec) 00:43:15.438 slat (usec): min=7, max=109, avg=20.57, stdev=14.16 00:43:15.438 clat (usec): min=9364, max=38012, avg=30147.26, stdev=1841.79 00:43:15.438 lat (usec): min=9372, max=38020, avg=30167.83, stdev=1840.94 00:43:15.438 clat percentiles (usec): 00:43:15.438 | 1.00th=[16319], 5.00th=[30016], 10.00th=[30016], 
20.00th=[30278], 00:43:15.438 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:43:15.438 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:43:15.438 | 99.00th=[31327], 99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:43:15.438 | 99.99th=[38011] 00:43:15.438 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2105.60, stdev=77.42, samples=20 00:43:15.438 iops : min= 512, max= 576, avg=526.40, stdev=19.35, samples=20 00:43:15.438 lat (msec) : 10=0.04%, 20=1.14%, 50=98.83% 00:43:15.438 cpu : usr=98.38%, sys=1.22%, ctx=13, majf=0, minf=9 00:43:15.438 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:15.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.438 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.438 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.438 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.438 filename0: (groupid=0, jobs=1): err= 0: pid=1306191: Sun Dec 15 06:33:34 2024 00:43:15.438 read: IOPS=527, BW=2108KiB/s (2159kB/s)(20.6MiB/10017msec) 00:43:15.438 slat (usec): min=7, max=128, avg=20.53, stdev=14.69 00:43:15.438 clat (usec): min=10916, max=31556, avg=30188.64, stdev=1771.07 00:43:15.438 lat (usec): min=10939, max=31571, avg=30209.18, stdev=1769.34 00:43:15.438 clat percentiles (usec): 00:43:15.438 | 1.00th=[16450], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:43:15.438 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:43:15.438 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:43:15.438 | 99.00th=[31327], 99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:43:15.438 | 99.99th=[31589] 00:43:15.438 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2105.60, stdev=77.42, samples=20 00:43:15.438 iops : min= 512, max= 576, avg=526.40, stdev=19.35, samples=20 00:43:15.438 lat (msec) : 20=1.21%, 50=98.79% 
00:43:15.438 cpu : usr=98.56%, sys=1.03%, ctx=15, majf=0, minf=9 00:43:15.438 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:15.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.438 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.438 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.438 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.438 filename0: (groupid=0, jobs=1): err= 0: pid=1306192: Sun Dec 15 06:33:34 2024 00:43:15.438 read: IOPS=524, BW=2099KiB/s (2149kB/s)(20.5MiB/10012msec) 00:43:15.438 slat (usec): min=6, max=135, avg=47.51, stdev=26.24 00:43:15.438 clat (usec): min=12648, max=52710, avg=30036.89, stdev=1611.10 00:43:15.438 lat (usec): min=12657, max=52723, avg=30084.41, stdev=1613.35 00:43:15.438 clat percentiles (usec): 00:43:15.438 | 1.00th=[26084], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:43:15.438 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:43:15.438 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:43:15.438 | 99.00th=[31065], 99.50th=[33424], 99.90th=[52691], 99.95th=[52691], 00:43:15.438 | 99.99th=[52691] 00:43:15.438 bw ( KiB/s): min= 1968, max= 2176, per=4.15%, avg=2090.95, stdev=69.14, samples=19 00:43:15.438 iops : min= 492, max= 544, avg=522.74, stdev=17.28, samples=19 00:43:15.438 lat (msec) : 20=0.95%, 50=98.86%, 100=0.19% 00:43:15.438 cpu : usr=98.65%, sys=0.95%, ctx=15, majf=0, minf=9 00:43:15.438 IO depths : 1=5.9%, 2=12.0%, 4=24.6%, 8=50.8%, 16=6.7%, 32=0.0%, >=64=0.0% 00:43:15.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.438 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.438 issued rwts: total=5254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.438 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.438 filename1: (groupid=0, jobs=1): err= 0: 
pid=1306193: Sun Dec 15 06:33:34 2024 00:43:15.438 read: IOPS=527, BW=2108KiB/s (2159kB/s)(20.6MiB/10017msec) 00:43:15.438 slat (usec): min=7, max=124, avg=46.95, stdev=26.00 00:43:15.438 clat (usec): min=11110, max=31456, avg=29991.38, stdev=1758.48 00:43:15.438 lat (usec): min=11128, max=31474, avg=30038.33, stdev=1759.36 00:43:15.438 clat percentiles (usec): 00:43:15.438 | 1.00th=[16319], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:43:15.438 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:43:15.438 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:43:15.438 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31327], 99.95th=[31327], 00:43:15.438 | 99.99th=[31327] 00:43:15.438 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2105.60, stdev=77.42, samples=20 00:43:15.438 iops : min= 512, max= 576, avg=526.40, stdev=19.35, samples=20 00:43:15.438 lat (msec) : 20=1.21%, 50=98.79% 00:43:15.438 cpu : usr=98.26%, sys=1.34%, ctx=15, majf=0, minf=9 00:43:15.438 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:15.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.438 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.438 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.438 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.438 filename1: (groupid=0, jobs=1): err= 0: pid=1306194: Sun Dec 15 06:33:34 2024 00:43:15.438 read: IOPS=525, BW=2103KiB/s (2154kB/s)(20.6MiB/10012msec) 00:43:15.438 slat (nsec): min=4963, max=48599, avg=20032.70, stdev=6125.62 00:43:15.438 clat (usec): min=17844, max=31721, avg=30251.06, stdev=1126.20 00:43:15.438 lat (usec): min=17859, max=31738, avg=30271.09, stdev=1126.69 00:43:15.438 clat percentiles (usec): 00:43:15.438 | 1.00th=[27132], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:43:15.438 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 
60.00th=[30278], 00:43:15.438 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:43:15.438 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:43:15.438 | 99.99th=[31851] 00:43:15.438 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2095.16, stdev=63.44, samples=19 00:43:15.438 iops : min= 512, max= 544, avg=523.79, stdev=15.86, samples=19 00:43:15.438 lat (msec) : 20=0.91%, 50=99.09% 00:43:15.438 cpu : usr=98.60%, sys=1.01%, ctx=13, majf=0, minf=9 00:43:15.438 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:15.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.438 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.438 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.438 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.438 filename1: (groupid=0, jobs=1): err= 0: pid=1306195: Sun Dec 15 06:33:34 2024 00:43:15.438 read: IOPS=523, BW=2094KiB/s (2144kB/s)(20.5MiB/10005msec) 00:43:15.438 slat (usec): min=4, max=126, avg=48.62, stdev=24.68 00:43:15.438 clat (usec): min=19380, max=54798, avg=30085.64, stdev=1438.95 00:43:15.438 lat (usec): min=19397, max=54812, avg=30134.26, stdev=1439.77 00:43:15.438 clat percentiles (usec): 00:43:15.438 | 1.00th=[28705], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:43:15.439 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:43:15.439 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:43:15.439 | 99.00th=[31065], 99.50th=[38011], 99.90th=[47973], 99.95th=[47973], 00:43:15.439 | 99.99th=[54789] 00:43:15.439 bw ( KiB/s): min= 1968, max= 2176, per=4.15%, avg=2090.95, stdev=69.14, samples=19 00:43:15.439 iops : min= 492, max= 544, avg=522.74, stdev=17.28, samples=19 00:43:15.439 lat (msec) : 20=0.42%, 50=99.54%, 100=0.04% 00:43:15.439 cpu : usr=98.55%, sys=1.05%, ctx=14, majf=0, minf=9 00:43:15.439 IO depths : 
1=6.0%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:43:15.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.439 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.439 issued rwts: total=5238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.439 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.439 filename1: (groupid=0, jobs=1): err= 0: pid=1306197: Sun Dec 15 06:33:34 2024 00:43:15.439 read: IOPS=524, BW=2099KiB/s (2149kB/s)(20.5MiB/10001msec) 00:43:15.439 slat (nsec): min=6270, max=86871, avg=37007.36, stdev=12484.05 00:43:15.439 clat (usec): min=14456, max=31319, avg=30176.17, stdev=898.78 00:43:15.439 lat (usec): min=14465, max=31342, avg=30213.18, stdev=899.72 00:43:15.439 clat percentiles (usec): 00:43:15.439 | 1.00th=[29230], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:43:15.439 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:43:15.439 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:43:15.439 | 99.00th=[31065], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:43:15.439 | 99.99th=[31327] 00:43:15.439 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2095.16, stdev=63.44, samples=19 00:43:15.439 iops : min= 512, max= 544, avg=523.79, stdev=15.86, samples=19 00:43:15.439 lat (msec) : 20=0.30%, 50=99.70% 00:43:15.439 cpu : usr=98.10%, sys=1.27%, ctx=88, majf=0, minf=9 00:43:15.439 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:15.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.439 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.439 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.439 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.439 filename1: (groupid=0, jobs=1): err= 0: pid=1306198: Sun Dec 15 06:33:34 2024 00:43:15.439 read: IOPS=524, BW=2097KiB/s 
(2147kB/s)(20.5MiB/10010msec) 00:43:15.439 slat (usec): min=10, max=123, avg=49.10, stdev=24.51 00:43:15.439 clat (usec): min=12989, max=47076, avg=30025.54, stdev=1477.63 00:43:15.439 lat (usec): min=13007, max=47092, avg=30074.64, stdev=1479.91 00:43:15.439 clat percentiles (usec): 00:43:15.439 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:43:15.439 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:43:15.439 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:43:15.439 | 99.00th=[31065], 99.50th=[31065], 99.90th=[46924], 99.95th=[46924], 00:43:15.439 | 99.99th=[46924] 00:43:15.439 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2092.80, stdev=75.15, samples=20 00:43:15.439 iops : min= 480, max= 544, avg=523.20, stdev=18.79, samples=20 00:43:15.439 lat (msec) : 20=0.61%, 50=99.39% 00:43:15.439 cpu : usr=98.54%, sys=1.07%, ctx=14, majf=0, minf=9 00:43:15.439 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:15.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.439 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.439 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.439 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.439 filename1: (groupid=0, jobs=1): err= 0: pid=1306199: Sun Dec 15 06:33:34 2024 00:43:15.439 read: IOPS=524, BW=2097KiB/s (2147kB/s)(20.5MiB/10010msec) 00:43:15.439 slat (usec): min=8, max=124, avg=50.52, stdev=24.20 00:43:15.439 clat (usec): min=13133, max=47012, avg=30040.64, stdev=1516.07 00:43:15.439 lat (usec): min=13149, max=47029, avg=30091.15, stdev=1517.84 00:43:15.439 clat percentiles (usec): 00:43:15.439 | 1.00th=[27395], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:43:15.439 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:43:15.439 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 
00:43:15.439 | 99.00th=[31327], 99.50th=[33162], 99.90th=[46924], 99.95th=[46924], 00:43:15.439 | 99.99th=[46924] 00:43:15.439 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2092.80, stdev=75.15, samples=20 00:43:15.439 iops : min= 480, max= 544, avg=523.20, stdev=18.79, samples=20 00:43:15.439 lat (msec) : 20=0.61%, 50=99.39% 00:43:15.439 cpu : usr=98.37%, sys=1.23%, ctx=14, majf=0, minf=9 00:43:15.439 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:43:15.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.439 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.439 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.439 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.439 filename1: (groupid=0, jobs=1): err= 0: pid=1306200: Sun Dec 15 06:33:34 2024 00:43:15.439 read: IOPS=527, BW=2110KiB/s (2161kB/s)(20.6MiB/10009msec) 00:43:15.439 slat (nsec): min=7581, max=58606, avg=13564.54, stdev=5707.71 00:43:15.439 clat (usec): min=11206, max=37418, avg=30206.02, stdev=1902.53 00:43:15.439 lat (usec): min=11254, max=37428, avg=30219.58, stdev=1901.73 00:43:15.439 clat percentiles (usec): 00:43:15.439 | 1.00th=[16319], 5.00th=[30016], 10.00th=[30278], 20.00th=[30278], 00:43:15.439 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:43:15.439 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:43:15.439 | 99.00th=[31327], 99.50th=[31327], 99.90th=[37487], 99.95th=[37487], 00:43:15.439 | 99.99th=[37487] 00:43:15.439 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2105.60, stdev=77.42, samples=20 00:43:15.439 iops : min= 512, max= 576, avg=526.40, stdev=19.35, samples=20 00:43:15.439 lat (msec) : 20=1.21%, 50=98.79% 00:43:15.439 cpu : usr=98.30%, sys=1.14%, ctx=96, majf=0, minf=9 00:43:15.439 IO depths : 1=6.0%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:43:15.439 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.439 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.439 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.439 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.439 filename1: (groupid=0, jobs=1): err= 0: pid=1306201: Sun Dec 15 06:33:34 2024 00:43:15.439 read: IOPS=527, BW=2108KiB/s (2159kB/s)(20.6MiB/10017msec) 00:43:15.439 slat (usec): min=7, max=125, avg=25.00, stdev=16.32 00:43:15.439 clat (usec): min=10874, max=31436, avg=30164.29, stdev=1771.21 00:43:15.439 lat (usec): min=10887, max=31451, avg=30189.29, stdev=1769.88 00:43:15.439 clat percentiles (usec): 00:43:15.439 | 1.00th=[16450], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:43:15.439 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:43:15.439 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:43:15.439 | 99.00th=[31327], 99.50th=[31327], 99.90th=[31327], 99.95th=[31327], 00:43:15.439 | 99.99th=[31327] 00:43:15.439 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2105.60, stdev=77.42, samples=20 00:43:15.439 iops : min= 512, max= 576, avg=526.40, stdev=19.35, samples=20 00:43:15.439 lat (msec) : 20=1.21%, 50=98.79% 00:43:15.439 cpu : usr=98.70%, sys=0.90%, ctx=16, majf=0, minf=9 00:43:15.439 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:15.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.439 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.439 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.439 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.439 filename2: (groupid=0, jobs=1): err= 0: pid=1306202: Sun Dec 15 06:33:34 2024 00:43:15.439 read: IOPS=524, BW=2097KiB/s (2147kB/s)(20.5MiB/10010msec) 00:43:15.439 slat (usec): min=6, max=117, avg=47.81, stdev=25.18 00:43:15.439 clat (usec): 
min=13049, max=47225, avg=30025.55, stdev=1482.38 00:43:15.439 lat (usec): min=13062, max=47242, avg=30073.36, stdev=1484.82 00:43:15.439 clat percentiles (usec): 00:43:15.439 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:43:15.439 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:43:15.439 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:43:15.439 | 99.00th=[31065], 99.50th=[31065], 99.90th=[47449], 99.95th=[47449], 00:43:15.439 | 99.99th=[47449] 00:43:15.439 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2092.80, stdev=75.15, samples=20 00:43:15.439 iops : min= 480, max= 544, avg=523.20, stdev=18.79, samples=20 00:43:15.439 lat (msec) : 20=0.61%, 50=99.39% 00:43:15.439 cpu : usr=98.40%, sys=1.19%, ctx=14, majf=0, minf=9 00:43:15.439 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:15.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.439 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.439 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.439 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.439 filename2: (groupid=0, jobs=1): err= 0: pid=1306203: Sun Dec 15 06:33:34 2024 00:43:15.439 read: IOPS=527, BW=2108KiB/s (2159kB/s)(20.6MiB/10017msec) 00:43:15.439 slat (usec): min=7, max=121, avg=46.94, stdev=26.14 00:43:15.439 clat (usec): min=10385, max=31445, avg=29997.17, stdev=1729.30 00:43:15.439 lat (usec): min=10393, max=31460, avg=30044.11, stdev=1730.36 00:43:15.439 clat percentiles (usec): 00:43:15.439 | 1.00th=[16581], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:43:15.439 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:43:15.439 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:43:15.439 | 99.00th=[31065], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:43:15.439 | 99.99th=[31327] 
00:43:15.439 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2105.60, stdev=77.42, samples=20 00:43:15.439 iops : min= 512, max= 576, avg=526.40, stdev=19.35, samples=20 00:43:15.439 lat (msec) : 20=1.17%, 50=98.83% 00:43:15.439 cpu : usr=98.49%, sys=1.10%, ctx=13, majf=0, minf=9 00:43:15.439 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:15.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.439 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.439 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.439 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.439 filename2: (groupid=0, jobs=1): err= 0: pid=1306204: Sun Dec 15 06:33:34 2024 00:43:15.439 read: IOPS=524, BW=2097KiB/s (2147kB/s)(20.5MiB/10010msec) 00:43:15.439 slat (nsec): min=7608, max=45708, avg=19158.39, stdev=6250.18 00:43:15.439 clat (usec): min=12061, max=52711, avg=30338.92, stdev=1780.17 00:43:15.440 lat (usec): min=12072, max=52729, avg=30358.08, stdev=1780.01 00:43:15.440 clat percentiles (usec): 00:43:15.440 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:43:15.440 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:43:15.440 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:43:15.440 | 99.00th=[31327], 99.50th=[31589], 99.90th=[52691], 99.95th=[52691], 00:43:15.440 | 99.99th=[52691] 00:43:15.440 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2092.95, stdev=74.79, samples=20 00:43:15.440 iops : min= 480, max= 544, avg=523.20, stdev=18.79, samples=20 00:43:15.440 lat (msec) : 20=0.65%, 50=99.05%, 100=0.30% 00:43:15.440 cpu : usr=98.56%, sys=1.06%, ctx=14, majf=0, minf=9 00:43:15.440 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:15.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.440 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.440 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.440 filename2: (groupid=0, jobs=1): err= 0: pid=1306205: Sun Dec 15 06:33:34 2024 00:43:15.440 read: IOPS=524, BW=2097KiB/s (2147kB/s)(20.5MiB/10010msec) 00:43:15.440 slat (usec): min=5, max=125, avg=37.91, stdev=22.78 00:43:15.440 clat (usec): min=12676, max=46950, avg=30153.47, stdev=1498.74 00:43:15.440 lat (usec): min=12693, max=46968, avg=30191.38, stdev=1499.36 00:43:15.440 clat percentiles (usec): 00:43:15.440 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29754], 20.00th=[30016], 00:43:15.440 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:43:15.440 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:43:15.440 | 99.00th=[31327], 99.50th=[31589], 99.90th=[46924], 99.95th=[46924], 00:43:15.440 | 99.99th=[46924] 00:43:15.440 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2092.80, stdev=75.15, samples=20 00:43:15.440 iops : min= 480, max= 544, avg=523.20, stdev=18.79, samples=20 00:43:15.440 lat (msec) : 20=0.61%, 50=99.39% 00:43:15.440 cpu : usr=98.32%, sys=1.28%, ctx=16, majf=0, minf=9 00:43:15.440 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:15.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.440 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.440 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.440 filename2: (groupid=0, jobs=1): err= 0: pid=1306206: Sun Dec 15 06:33:34 2024 00:43:15.440 read: IOPS=524, BW=2097KiB/s (2147kB/s)(20.5MiB/10010msec) 00:43:15.440 slat (usec): min=6, max=121, avg=47.97, stdev=24.92 00:43:15.440 clat (usec): min=13066, max=46647, avg=30023.93, stdev=1461.44 00:43:15.440 lat (usec): min=13081, max=46665, 
avg=30071.90, stdev=1463.95 00:43:15.440 clat percentiles (usec): 00:43:15.440 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:43:15.440 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:43:15.440 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30540], 00:43:15.440 | 99.00th=[31065], 99.50th=[31065], 99.90th=[46400], 99.95th=[46400], 00:43:15.440 | 99.99th=[46400] 00:43:15.440 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2092.95, stdev=74.79, samples=20 00:43:15.440 iops : min= 480, max= 544, avg=523.20, stdev=18.79, samples=20 00:43:15.440 lat (msec) : 20=0.61%, 50=99.39% 00:43:15.440 cpu : usr=98.57%, sys=1.00%, ctx=19, majf=0, minf=9 00:43:15.440 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:15.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.440 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.440 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.440 filename2: (groupid=0, jobs=1): err= 0: pid=1306208: Sun Dec 15 06:33:34 2024 00:43:15.440 read: IOPS=525, BW=2102KiB/s (2152kB/s)(20.6MiB/10018msec) 00:43:15.440 slat (nsec): min=6343, max=50802, avg=19048.38, stdev=6667.63 00:43:15.440 clat (usec): min=8422, max=35158, avg=30284.71, stdev=1059.76 00:43:15.440 lat (usec): min=8431, max=35175, avg=30303.76, stdev=1059.60 00:43:15.440 clat percentiles (usec): 00:43:15.440 | 1.00th=[27132], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:43:15.440 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:43:15.440 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:43:15.440 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31589], 99.95th=[31851], 00:43:15.440 | 99.99th=[35390] 00:43:15.440 bw ( KiB/s): min= 2048, max= 2176, per=4.17%, avg=2099.20, stdev=64.34, samples=20 
00:43:15.440 iops : min= 512, max= 544, avg=524.80, stdev=16.08, samples=20 00:43:15.440 lat (msec) : 10=0.04%, 20=0.57%, 50=99.39% 00:43:15.440 cpu : usr=98.16%, sys=1.44%, ctx=14, majf=0, minf=9 00:43:15.440 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:15.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.440 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.440 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.440 filename2: (groupid=0, jobs=1): err= 0: pid=1306209: Sun Dec 15 06:33:34 2024 00:43:15.440 read: IOPS=522, BW=2091KiB/s (2141kB/s)(20.4MiB/10010msec) 00:43:15.440 slat (usec): min=7, max=119, avg=47.55, stdev=26.80 00:43:15.440 clat (usec): min=12947, max=70377, avg=30128.19, stdev=2568.25 00:43:15.440 lat (usec): min=12956, max=70391, avg=30175.74, stdev=2568.21 00:43:15.440 clat percentiles (usec): 00:43:15.440 | 1.00th=[28705], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:43:15.440 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:43:15.440 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:43:15.440 | 99.00th=[31327], 99.50th=[37487], 99.90th=[70779], 99.95th=[70779], 00:43:15.440 | 99.99th=[70779] 00:43:15.440 bw ( KiB/s): min= 1923, max= 2176, per=4.14%, avg=2086.55, stdev=72.76, samples=20 00:43:15.440 iops : min= 480, max= 544, avg=521.60, stdev=18.28, samples=20 00:43:15.440 lat (msec) : 20=0.61%, 50=99.08%, 100=0.31% 00:43:15.440 cpu : usr=98.60%, sys=1.01%, ctx=15, majf=0, minf=9 00:43:15.440 IO depths : 1=6.2%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:15.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.440 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.440 issued rwts: total=5232,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:43:15.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.440 filename2: (groupid=0, jobs=1): err= 0: pid=1306210: Sun Dec 15 06:33:34 2024 00:43:15.440 read: IOPS=524, BW=2097KiB/s (2148kB/s)(20.5MiB/10009msec) 00:43:15.440 slat (usec): min=7, max=117, avg=18.53, stdev=18.06 00:43:15.440 clat (usec): min=12977, max=68953, avg=30394.36, stdev=3491.97 00:43:15.440 lat (usec): min=12988, max=68998, avg=30412.88, stdev=3490.32 00:43:15.440 clat percentiles (usec): 00:43:15.440 | 1.00th=[20055], 5.00th=[25822], 10.00th=[29492], 20.00th=[30278], 00:43:15.440 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:43:15.440 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[34866], 00:43:15.440 | 99.00th=[40109], 99.50th=[41157], 99.90th=[68682], 99.95th=[68682], 00:43:15.440 | 99.99th=[68682] 00:43:15.440 bw ( KiB/s): min= 1920, max= 2192, per=4.15%, avg=2092.80, stdev=70.72, samples=20 00:43:15.440 iops : min= 480, max= 548, avg=523.20, stdev=17.68, samples=20 00:43:15.440 lat (msec) : 20=1.03%, 50=98.67%, 100=0.30% 00:43:15.440 cpu : usr=98.50%, sys=1.09%, ctx=14, majf=0, minf=9 00:43:15.440 IO depths : 1=3.4%, 2=7.0%, 4=15.2%, 8=63.5%, 16=10.8%, 32=0.0%, >=64=0.0% 00:43:15.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.440 complete : 0=0.0%, 4=91.8%, 8=4.2%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:15.440 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:15.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:15.440 00:43:15.440 Run status group 0 (all jobs): 00:43:15.440 READ: bw=49.2MiB/s (51.6MB/s), 2089KiB/s-2273KiB/s (2139kB/s-2328kB/s), io=495MiB (519MB), run=10001-10050msec 00:43:15.440 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:43:15.440 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:15.440 06:33:34 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@45 -- # for sub in "$@" 00:43:15.440 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:15.440 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:15.440 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:15.440 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.440 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:15.440 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.440 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:15.440 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.440 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:15.440 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.440 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:15.440 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:15.440 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:43:15.440 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:15.440 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.440 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:15.440 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.440 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:15.440 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:43:15.440 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:15.440 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.440 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:15.440 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:43:15.440 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:43:15.440 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:43:15.440 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.440 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:43:15.441 06:33:34 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:15.441 bdev_null0 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:15.441 [2024-12-15 06:33:34.422004] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:15.441 bdev_null1 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:15.441 { 00:43:15.441 "params": { 00:43:15.441 "name": "Nvme$subsystem", 00:43:15.441 "trtype": "$TEST_TRANSPORT", 00:43:15.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:15.441 "adrfam": "ipv4", 00:43:15.441 "trsvcid": "$NVMF_PORT", 
00:43:15.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:15.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:15.441 "hdgst": ${hdgst:-false}, 00:43:15.441 "ddgst": ${ddgst:-false} 00:43:15.441 }, 00:43:15.441 "method": "bdev_nvme_attach_controller" 00:43:15.441 } 00:43:15.441 EOF 00:43:15.441 )") 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:15.441 
06:33:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:15.441 { 00:43:15.441 "params": { 00:43:15.441 "name": "Nvme$subsystem", 00:43:15.441 "trtype": "$TEST_TRANSPORT", 00:43:15.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:15.441 "adrfam": "ipv4", 00:43:15.441 "trsvcid": "$NVMF_PORT", 00:43:15.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:15.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:15.441 "hdgst": ${hdgst:-false}, 00:43:15.441 "ddgst": ${ddgst:-false} 00:43:15.441 }, 00:43:15.441 "method": "bdev_nvme_attach_controller" 00:43:15.441 } 00:43:15.441 EOF 00:43:15.441 )") 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:15.441 "params": { 00:43:15.441 "name": "Nvme0", 00:43:15.441 "trtype": "tcp", 00:43:15.441 "traddr": "10.0.0.2", 00:43:15.441 "adrfam": "ipv4", 00:43:15.441 "trsvcid": "4420", 00:43:15.441 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:15.441 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:15.441 "hdgst": false, 00:43:15.441 "ddgst": false 00:43:15.441 }, 00:43:15.441 "method": "bdev_nvme_attach_controller" 00:43:15.441 },{ 00:43:15.441 "params": { 00:43:15.441 "name": "Nvme1", 00:43:15.441 "trtype": "tcp", 00:43:15.441 "traddr": "10.0.0.2", 00:43:15.441 "adrfam": "ipv4", 00:43:15.441 "trsvcid": "4420", 00:43:15.441 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:15.441 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:15.441 "hdgst": false, 00:43:15.441 "ddgst": false 00:43:15.441 }, 00:43:15.441 "method": "bdev_nvme_attach_controller" 00:43:15.441 }' 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:15.441 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:15.442 06:33:34 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:15.442 06:33:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:15.442 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:43:15.442 ... 00:43:15.442 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:43:15.442 ... 00:43:15.442 fio-3.35 00:43:15.442 Starting 4 threads 00:43:20.713 00:43:20.713 filename0: (groupid=0, jobs=1): err= 0: pid=1308038: Sun Dec 15 06:33:40 2024 00:43:20.713 read: IOPS=2763, BW=21.6MiB/s (22.6MB/s)(108MiB/5001msec) 00:43:20.713 slat (nsec): min=6190, max=46552, avg=9823.08, stdev=3804.23 00:43:20.713 clat (usec): min=805, max=5625, avg=2865.26, stdev=452.34 00:43:20.713 lat (usec): min=818, max=5637, avg=2875.09, stdev=452.57 00:43:20.713 clat percentiles (usec): 00:43:20.713 | 1.00th=[ 1762], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2507], 00:43:20.713 | 30.00th=[ 2638], 40.00th=[ 2769], 50.00th=[ 2868], 60.00th=[ 2966], 00:43:20.713 | 70.00th=[ 3064], 80.00th=[ 3195], 90.00th=[ 3392], 95.00th=[ 3556], 00:43:20.713 | 99.00th=[ 4113], 99.50th=[ 4490], 99.90th=[ 5014], 99.95th=[ 5342], 00:43:20.713 | 99.99th=[ 5604] 00:43:20.713 bw ( KiB/s): min=20912, max=23184, per=26.38%, avg=22128.00, stdev=707.94, samples=9 00:43:20.713 iops : min= 2614, max= 2898, avg=2766.00, stdev=88.49, samples=9 00:43:20.713 lat (usec) : 1000=0.04% 00:43:20.713 lat (msec) : 2=2.12%, 4=96.44%, 10=1.40% 00:43:20.713 cpu : usr=96.34%, sys=3.28%, ctx=10, majf=0, minf=0 00:43:20.713 IO depths : 1=0.3%, 2=8.5%, 4=61.3%, 8=29.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:20.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.713 complete : 0=0.0%, 
4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.713 issued rwts: total=13821,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.713 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:20.713 filename0: (groupid=0, jobs=1): err= 0: pid=1308040: Sun Dec 15 06:33:40 2024 00:43:20.713 read: IOPS=2484, BW=19.4MiB/s (20.4MB/s)(97.1MiB/5001msec) 00:43:20.713 slat (nsec): min=6173, max=40485, avg=9950.67, stdev=3804.96 00:43:20.713 clat (usec): min=698, max=6331, avg=3188.99, stdev=539.40 00:43:20.713 lat (usec): min=711, max=6338, avg=3198.94, stdev=539.19 00:43:20.713 clat percentiles (usec): 00:43:20.713 | 1.00th=[ 2008], 5.00th=[ 2507], 10.00th=[ 2671], 20.00th=[ 2835], 00:43:20.713 | 30.00th=[ 2966], 40.00th=[ 3032], 50.00th=[ 3097], 60.00th=[ 3195], 00:43:20.713 | 70.00th=[ 3326], 80.00th=[ 3490], 90.00th=[ 3818], 95.00th=[ 4228], 00:43:20.713 | 99.00th=[ 5080], 99.50th=[ 5407], 99.90th=[ 5735], 99.95th=[ 6063], 00:43:20.713 | 99.99th=[ 6128] 00:43:20.713 bw ( KiB/s): min=19152, max=21456, per=23.79%, avg=19952.00, stdev=739.12, samples=9 00:43:20.713 iops : min= 2394, max= 2682, avg=2494.00, stdev=92.39, samples=9 00:43:20.713 lat (usec) : 750=0.02%, 1000=0.08% 00:43:20.713 lat (msec) : 2=0.85%, 4=91.83%, 10=7.22% 00:43:20.713 cpu : usr=96.50%, sys=3.18%, ctx=9, majf=0, minf=9 00:43:20.713 IO depths : 1=0.2%, 2=5.1%, 4=67.6%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:20.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.713 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.713 issued rwts: total=12427,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.713 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:20.713 filename1: (groupid=0, jobs=1): err= 0: pid=1308041: Sun Dec 15 06:33:40 2024 00:43:20.713 read: IOPS=2546, BW=19.9MiB/s (20.9MB/s)(99.5MiB/5001msec) 00:43:20.713 slat (nsec): min=6186, max=52772, avg=11330.67, stdev=5308.53 00:43:20.713 clat (usec): min=632, 
max=6067, avg=3105.70, stdev=545.65 00:43:20.713 lat (usec): min=644, max=6073, avg=3117.03, stdev=545.45 00:43:20.713 clat percentiles (usec): 00:43:20.713 | 1.00th=[ 1893], 5.00th=[ 2343], 10.00th=[ 2507], 20.00th=[ 2737], 00:43:20.713 | 30.00th=[ 2868], 40.00th=[ 2966], 50.00th=[ 3064], 60.00th=[ 3163], 00:43:20.713 | 70.00th=[ 3261], 80.00th=[ 3425], 90.00th=[ 3720], 95.00th=[ 4113], 00:43:20.713 | 99.00th=[ 4948], 99.50th=[ 5342], 99.90th=[ 5669], 99.95th=[ 5800], 00:43:20.713 | 99.99th=[ 5997] 00:43:20.713 bw ( KiB/s): min=19312, max=21760, per=24.31%, avg=20394.67, stdev=861.70, samples=9 00:43:20.713 iops : min= 2414, max= 2720, avg=2549.33, stdev=107.71, samples=9 00:43:20.713 lat (usec) : 750=0.04%, 1000=0.05% 00:43:20.713 lat (msec) : 2=1.42%, 4=92.48%, 10=6.01% 00:43:20.713 cpu : usr=93.16%, sys=5.04%, ctx=240, majf=0, minf=9 00:43:20.713 IO depths : 1=0.4%, 2=5.8%, 4=66.1%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:20.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.713 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.713 issued rwts: total=12737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.713 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:20.713 filename1: (groupid=0, jobs=1): err= 0: pid=1308042: Sun Dec 15 06:33:40 2024 00:43:20.713 read: IOPS=2689, BW=21.0MiB/s (22.0MB/s)(105MiB/5001msec) 00:43:20.713 slat (nsec): min=6179, max=36395, avg=9892.41, stdev=3725.00 00:43:20.713 clat (usec): min=821, max=5711, avg=2943.90, stdev=468.22 00:43:20.713 lat (usec): min=832, max=5718, avg=2953.80, stdev=468.49 00:43:20.713 clat percentiles (usec): 00:43:20.713 | 1.00th=[ 1876], 5.00th=[ 2245], 10.00th=[ 2409], 20.00th=[ 2573], 00:43:20.713 | 30.00th=[ 2737], 40.00th=[ 2835], 50.00th=[ 2933], 60.00th=[ 3032], 00:43:20.713 | 70.00th=[ 3130], 80.00th=[ 3261], 90.00th=[ 3458], 95.00th=[ 3720], 00:43:20.713 | 99.00th=[ 4424], 99.50th=[ 4752], 99.90th=[ 5276], 99.95th=[ 5342], 
00:43:20.713 | 99.99th=[ 5669] 00:43:20.713 bw ( KiB/s): min=20512, max=22768, per=25.63%, avg=21502.22, stdev=727.80, samples=9 00:43:20.713 iops : min= 2564, max= 2846, avg=2687.78, stdev=90.97, samples=9 00:43:20.713 lat (usec) : 1000=0.01% 00:43:20.713 lat (msec) : 2=1.70%, 4=95.73%, 10=2.55% 00:43:20.713 cpu : usr=96.74%, sys=2.90%, ctx=8, majf=0, minf=0 00:43:20.713 IO depths : 1=0.2%, 2=9.8%, 4=61.0%, 8=29.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:20.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.713 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:20.713 issued rwts: total=13450,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:20.713 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:20.713 00:43:20.713 Run status group 0 (all jobs): 00:43:20.713 READ: bw=81.9MiB/s (85.9MB/s), 19.4MiB/s-21.6MiB/s (20.4MB/s-22.6MB/s), io=410MiB (430MB), run=5001-5001msec 00:43:20.713 06:33:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:43:20.713 06:33:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:20.713 06:33:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:20.713 06:33:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:20.713 06:33:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:20.713 06:33:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:20.713 06:33:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.713 06:33:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:20.714 06:33:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.714 06:33:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:20.714 06:33:40 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.714 06:33:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:20.714 06:33:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.714 06:33:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:20.714 06:33:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:20.714 06:33:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:43:20.714 06:33:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:20.714 06:33:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.714 06:33:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:20.714 06:33:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.714 06:33:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:20.714 06:33:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.714 06:33:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:20.714 06:33:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.714 00:43:20.714 real 0m24.440s 00:43:20.714 user 4m52.805s 00:43:20.714 sys 0m5.185s 00:43:20.714 06:33:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:20.714 06:33:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:20.714 ************************************ 00:43:20.714 END TEST fio_dif_rand_params 00:43:20.714 ************************************ 00:43:20.714 06:33:40 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:43:20.714 06:33:40 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:20.714 06:33:40 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:20.714 06:33:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:20.714 ************************************ 00:43:20.714 START TEST fio_dif_digest 00:43:20.714 ************************************ 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 
00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:20.714 bdev_null0 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:20.714 [2024-12-15 06:33:40.840028] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@560 -- # config=() 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:20.714 { 00:43:20.714 "params": { 00:43:20.714 "name": "Nvme$subsystem", 00:43:20.714 "trtype": "$TEST_TRANSPORT", 00:43:20.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:20.714 "adrfam": "ipv4", 00:43:20.714 "trsvcid": "$NVMF_PORT", 00:43:20.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:20.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:20.714 "hdgst": ${hdgst:-false}, 00:43:20.714 "ddgst": ${ddgst:-false} 00:43:20.714 }, 00:43:20.714 "method": "bdev_nvme_attach_controller" 00:43:20.714 } 00:43:20.714 EOF 00:43:20.714 )") 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:20.714 06:33:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:43:20.973 06:33:40 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:43:20.973 06:33:40 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:43:20.973 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:20.973 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:43:20.973 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:20.973 06:33:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:43:20.973 06:33:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:43:20.973 06:33:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:20.973 "params": { 00:43:20.973 "name": "Nvme0", 00:43:20.973 "trtype": "tcp", 00:43:20.973 "traddr": "10.0.0.2", 00:43:20.973 "adrfam": "ipv4", 00:43:20.973 "trsvcid": "4420", 00:43:20.973 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:20.973 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:20.973 "hdgst": true, 00:43:20.973 "ddgst": true 00:43:20.973 }, 00:43:20.973 "method": "bdev_nvme_attach_controller" 00:43:20.973 }' 00:43:20.973 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:20.973 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:20.973 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:20.973 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:20.973 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:20.973 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:20.973 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:20.973 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:20.973 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:20.973 06:33:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:21.232 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:43:21.232 ... 
00:43:21.232 fio-3.35 00:43:21.232 Starting 3 threads 00:43:33.601 00:43:33.601 filename0: (groupid=0, jobs=1): err= 0: pid=1309265: Sun Dec 15 06:33:51 2024 00:43:33.601 read: IOPS=295, BW=36.9MiB/s (38.7MB/s)(371MiB/10045msec) 00:43:33.601 slat (nsec): min=6972, max=58970, avg=25155.24, stdev=7493.71 00:43:33.601 clat (usec): min=7721, max=50866, avg=10125.55, stdev=1226.77 00:43:33.601 lat (usec): min=7747, max=50890, avg=10150.71, stdev=1226.65 00:43:33.601 clat percentiles (usec): 00:43:33.601 | 1.00th=[ 8455], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9503], 00:43:33.601 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:43:33.601 | 70.00th=[10552], 80.00th=[10683], 90.00th=[10945], 95.00th=[11207], 00:43:33.601 | 99.00th=[11600], 99.50th=[11863], 99.90th=[12387], 99.95th=[46924], 00:43:33.601 | 99.99th=[51119] 00:43:33.601 bw ( KiB/s): min=36864, max=38656, per=35.94%, avg=37913.60, stdev=469.11, samples=20 00:43:33.601 iops : min= 288, max= 302, avg=296.20, stdev= 3.66, samples=20 00:43:33.601 lat (msec) : 10=43.25%, 20=56.68%, 50=0.03%, 100=0.03% 00:43:33.601 cpu : usr=96.92%, sys=2.74%, ctx=19, majf=0, minf=2 00:43:33.601 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:33.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:33.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:33.601 issued rwts: total=2964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:33.601 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:33.601 filename0: (groupid=0, jobs=1): err= 0: pid=1309266: Sun Dec 15 06:33:51 2024 00:43:33.601 read: IOPS=267, BW=33.5MiB/s (35.1MB/s)(336MiB/10045msec) 00:43:33.601 slat (nsec): min=6681, max=45221, avg=18128.73, stdev=6554.67 00:43:33.601 clat (usec): min=8781, max=48763, avg=11165.87, stdev=1247.94 00:43:33.601 lat (usec): min=8804, max=48787, avg=11184.00, stdev=1248.00 00:43:33.601 clat percentiles (usec): 00:43:33.601 
| 1.00th=[ 9503], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 00:43:33.601 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:43:33.601 | 70.00th=[11469], 80.00th=[11731], 90.00th=[11994], 95.00th=[12387], 00:43:33.601 | 99.00th=[13042], 99.50th=[13173], 99.90th=[14222], 99.95th=[47973], 00:43:33.601 | 99.99th=[49021] 00:43:33.601 bw ( KiB/s): min=33536, max=35328, per=32.62%, avg=34406.40, stdev=410.27, samples=20 00:43:33.601 iops : min= 262, max= 276, avg=268.80, stdev= 3.21, samples=20 00:43:33.601 lat (msec) : 10=5.80%, 20=94.13%, 50=0.07% 00:43:33.601 cpu : usr=96.55%, sys=3.13%, ctx=17, majf=0, minf=0 00:43:33.601 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:33.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:33.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:33.601 issued rwts: total=2690,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:33.601 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:33.601 filename0: (groupid=0, jobs=1): err= 0: pid=1309267: Sun Dec 15 06:33:51 2024 00:43:33.601 read: IOPS=261, BW=32.7MiB/s (34.2MB/s)(328MiB/10045msec) 00:43:33.601 slat (nsec): min=6321, max=49848, avg=18276.93, stdev=6372.10 00:43:33.601 clat (usec): min=7336, max=45955, avg=11448.13, stdev=1185.81 00:43:33.601 lat (usec): min=7348, max=45969, avg=11466.41, stdev=1185.88 00:43:33.601 clat percentiles (usec): 00:43:33.601 | 1.00th=[ 9765], 5.00th=[10290], 10.00th=[10552], 20.00th=[10814], 00:43:33.601 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:43:33.601 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12387], 95.00th=[12649], 00:43:33.601 | 99.00th=[13173], 99.50th=[13304], 99.90th=[14222], 99.95th=[44303], 00:43:33.601 | 99.99th=[45876] 00:43:33.601 bw ( KiB/s): min=32768, max=34560, per=31.80%, avg=33548.80, stdev=480.55, samples=20 00:43:33.601 iops : min= 256, max= 270, avg=262.10, stdev= 3.75, 
samples=20 00:43:33.601 lat (msec) : 10=2.59%, 20=97.33%, 50=0.08% 00:43:33.601 cpu : usr=96.12%, sys=3.54%, ctx=16, majf=0, minf=11 00:43:33.601 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:33.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:33.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:33.601 issued rwts: total=2624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:33.601 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:33.601 00:43:33.601 Run status group 0 (all jobs): 00:43:33.601 READ: bw=103MiB/s (108MB/s), 32.7MiB/s-36.9MiB/s (34.2MB/s-38.7MB/s), io=1035MiB (1085MB), run=10045-10045msec 00:43:33.601 06:33:52 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:43:33.601 06:33:52 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:43:33.601 06:33:52 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:43:33.601 06:33:52 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:33.601 06:33:52 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:43:33.601 06:33:52 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:33.601 06:33:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:33.601 06:33:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:33.601 06:33:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:33.601 06:33:52 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:33.601 06:33:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:33.601 06:33:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:33.601 06:33:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:33.601 00:43:33.601 real 
0m11.350s 00:43:33.601 user 0m36.072s 00:43:33.601 sys 0m1.330s 00:43:33.601 06:33:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:33.601 06:33:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:33.601 ************************************ 00:43:33.601 END TEST fio_dif_digest 00:43:33.601 ************************************ 00:43:33.601 06:33:52 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:43:33.601 06:33:52 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:43:33.601 06:33:52 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:33.601 06:33:52 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:43:33.601 06:33:52 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:33.601 06:33:52 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:43:33.601 06:33:52 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:33.601 06:33:52 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:33.601 rmmod nvme_tcp 00:43:33.601 rmmod nvme_fabrics 00:43:33.601 rmmod nvme_keyring 00:43:33.601 06:33:52 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:33.601 06:33:52 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:43:33.601 06:33:52 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:43:33.601 06:33:52 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1300398 ']' 00:43:33.601 06:33:52 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1300398 00:43:33.601 06:33:52 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1300398 ']' 00:43:33.601 06:33:52 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1300398 00:43:33.601 06:33:52 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:43:33.601 06:33:52 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:33.601 06:33:52 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1300398 00:43:33.601 06:33:52 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:33.601 06:33:52 
nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:33.601 06:33:52 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1300398' 00:43:33.601 killing process with pid 1300398 00:43:33.601 06:33:52 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1300398 00:43:33.601 06:33:52 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1300398 00:43:33.601 06:33:52 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:43:33.601 06:33:52 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:34.978 Waiting for block devices as requested 00:43:35.236 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:43:35.236 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:35.236 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:35.494 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:35.494 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:35.494 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:35.752 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:35.752 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:35.752 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:35.753 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:36.011 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:36.011 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:36.011 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:36.269 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:36.269 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:36.269 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:36.527 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:36.527 06:33:56 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:36.527 06:33:56 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:36.527 06:33:56 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:43:36.527 06:33:56 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:43:36.527 06:33:56 nvmf_dif -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:43:36.527 06:33:56 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:43:36.527 06:33:56 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:36.527 06:33:56 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:36.527 06:33:56 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:36.527 06:33:56 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:36.527 06:33:56 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:39.059 06:33:58 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:39.059 00:43:39.059 real 1m14.165s 00:43:39.059 user 7m10.633s 00:43:39.059 sys 0m20.436s 00:43:39.059 06:33:58 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:39.059 06:33:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:39.060 ************************************ 00:43:39.060 END TEST nvmf_dif 00:43:39.060 ************************************ 00:43:39.060 06:33:58 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:39.060 06:33:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:39.060 06:33:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:39.060 06:33:58 -- common/autotest_common.sh@10 -- # set +x 00:43:39.060 ************************************ 00:43:39.060 START TEST nvmf_abort_qd_sizes 00:43:39.060 ************************************ 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:39.060 * Looking for test storage... 
00:43:39.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:39.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:39.060 --rc genhtml_branch_coverage=1 00:43:39.060 --rc genhtml_function_coverage=1 00:43:39.060 --rc genhtml_legend=1 00:43:39.060 --rc geninfo_all_blocks=1 00:43:39.060 --rc geninfo_unexecuted_blocks=1 00:43:39.060 00:43:39.060 ' 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:39.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:39.060 --rc genhtml_branch_coverage=1 00:43:39.060 --rc genhtml_function_coverage=1 00:43:39.060 --rc genhtml_legend=1 00:43:39.060 --rc 
geninfo_all_blocks=1 00:43:39.060 --rc geninfo_unexecuted_blocks=1 00:43:39.060 00:43:39.060 ' 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:39.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:39.060 --rc genhtml_branch_coverage=1 00:43:39.060 --rc genhtml_function_coverage=1 00:43:39.060 --rc genhtml_legend=1 00:43:39.060 --rc geninfo_all_blocks=1 00:43:39.060 --rc geninfo_unexecuted_blocks=1 00:43:39.060 00:43:39.060 ' 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:39.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:39.060 --rc genhtml_branch_coverage=1 00:43:39.060 --rc genhtml_function_coverage=1 00:43:39.060 --rc genhtml_legend=1 00:43:39.060 --rc geninfo_all_blocks=1 00:43:39.060 --rc geninfo_unexecuted_blocks=1 00:43:39.060 00:43:39.060 ' 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:39.060 06:33:58 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:39.060 06:33:58 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:39.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:43:39.060 06:33:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:44.332 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:44.332 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:43:44.332 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:44.332 06:34:04 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:43:44.332 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:44.332 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:44.332 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:44.332 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:43:44.332 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:44.332 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:43:44.332 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:43:44.332 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:43:44.332 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:43:44.332 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:43:44.332 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:43:44.332 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:44.332 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:44.332 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:44.332 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:44.332 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:44.332 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:44.332 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:44.332 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:44.332 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:44.332 06:34:04 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:44.332 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:44.332 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:44.332 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:43:44.333 Found 0000:af:00.0 (0x8086 - 0x159b) 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:43:44.333 Found 0000:af:00.1 (0x8086 - 0x159b) 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:43:44.333 Found net devices under 0000:af:00.0: cvl_0_0 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:43:44.333 Found net devices under 0000:af:00.1: cvl_0_1 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:44.333 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:44.592 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:44.592 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:44.592 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:44.592 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:44.592 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:44.592 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:44.592 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:44.592 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:44.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:44.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:43:44.592 00:43:44.592 --- 10.0.0.2 ping statistics --- 00:43:44.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:44.592 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:43:44.592 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:44.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:44.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:43:44.592 00:43:44.592 --- 10.0.0.1 ping statistics --- 00:43:44.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:44.592 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:43:44.592 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:44.592 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:43:44.592 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:43:44.592 06:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:47.880 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:47.880 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:47.880 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:47.880 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:47.880 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:47.880 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:47.880 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:47.880 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:47.880 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:47.880 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:47.880 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:47.880 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:47.880 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:47.880 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:47.880 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:47.880 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:48.448 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:43:48.448 06:34:08 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:48.448 06:34:08 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:48.448 06:34:08 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:48.448 06:34:08 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:48.448 06:34:08 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:48.448 06:34:08 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:48.448 06:34:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:43:48.448 06:34:08 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:48.448 06:34:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:48.448 06:34:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:48.448 06:34:08 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1316911 00:43:48.448 06:34:08 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:43:48.448 06:34:08 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1316911 00:43:48.448 06:34:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1316911 ']' 00:43:48.448 06:34:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:48.448 06:34:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:48.448 06:34:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:48.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:48.448 06:34:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:48.448 06:34:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:48.448 [2024-12-15 06:34:08.525459] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:43:48.448 [2024-12-15 06:34:08.525511] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:48.708 [2024-12-15 06:34:08.604086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:48.708 [2024-12-15 06:34:08.628436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:48.708 [2024-12-15 06:34:08.628478] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:48.708 [2024-12-15 06:34:08.628485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:48.708 [2024-12-15 06:34:08.628492] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:48.708 [2024-12-15 06:34:08.628498] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
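The target was launched with `-m 0xf`, so the EAL core mask selects cores 0-3 and one reactor is started per selected core. A minimal standalone sketch (not code from the test scripts) of how such a hex mask expands into core IDs:

```shell
# Standalone illustration: expand an SPDK-style core mask (here 0xf)
# into the individual core IDs a reactor would be started on.
mask=0xf
cores=()
for ((i = 0; i < 64; i++)); do
  # keep core i if bit i of the mask is set
  (( (mask >> i) & 1 )) && cores+=("$i")
done
echo "${cores[*]}"   # 0 1 2 3
```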
00:43:48.708 [2024-12-15 06:34:08.629821] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:43:48.708 [2024-12-15 06:34:08.629930] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:43:48.708 [2024-12-15 06:34:08.630040] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:43:48.708 [2024-12-15 06:34:08.630041] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:43:48.708 06:34:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:48.708 06:34:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:43:48.708 06:34:08 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:48.708 06:34:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:48.708 06:34:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:48.708 06:34:08 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:48.708 06:34:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:43:48.708 06:34:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:43:48.708 06:34:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:43:48.708 06:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:43:48.708 06:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:43:48.708 06:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:43:48.708 06:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:43:48.708 06:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:43:48.708 06:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:43:48.708 06:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:43:48.708 06:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:43:48.708 06:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:43:48.708 06:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:43:48.708 06:34:08 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:43:48.708 06:34:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:43:48.708 06:34:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:43:48.708 06:34:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:43:48.708 06:34:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:48.708 06:34:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:48.708 06:34:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:48.708 ************************************ 00:43:48.708 START TEST spdk_target_abort 00:43:48.708 ************************************ 00:43:48.708 06:34:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:43:48.708 06:34:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:43:48.708 06:34:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:43:48.708 06:34:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:48.708 06:34:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:51.996 spdk_targetn1 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:51.996 [2024-12-15 06:34:11.640334] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:51.996 [2024-12-15 06:34:11.680547] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:51.996 06:34:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:55.283 Initializing NVMe Controllers 00:43:55.283 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:55.283 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:55.283 Initialization complete. Launching workers. 
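The `rabort` trace above accumulates the `-r` connection string one `key:value` pair per loop pass. A standalone re-creation of that accumulation (mirroring the variable names in the trace, using bash indirect expansion; this is a sketch, not the script itself):

```shell
# Re-create rabort's target-string accumulation from the trace above:
# each pass appends "key:value" for one connection parameter.
trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:testnqn
target=
for r in trtype adrfam traddr trsvcid subnqn; do
  # ${!r} is bash indirect expansion: the value of the variable named by $r
  target="${target:+$target }$r:${!r}"
done
echo "$target"
```

The final value matches the string handed to the abort example binary in the log: `trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn`.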
00:43:55.283 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16080, failed: 0 00:43:55.283 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1405, failed to submit 14675 00:43:55.283 success 722, unsuccessful 683, failed 0 00:43:55.283 06:34:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:55.283 06:34:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:58.568 Initializing NVMe Controllers 00:43:58.568 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:58.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:58.568 Initialization complete. Launching workers. 00:43:58.568 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8587, failed: 0 00:43:58.568 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1247, failed to submit 7340 00:43:58.568 success 294, unsuccessful 953, failed 0 00:43:58.568 06:34:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:58.568 06:34:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:01.854 Initializing NVMe Controllers 00:44:01.854 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:01.854 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:01.854 Initialization complete. Launching workers. 
00:44:01.854 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38881, failed: 0 00:44:01.854 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2897, failed to submit 35984 00:44:01.854 success 633, unsuccessful 2264, failed 0 00:44:01.854 06:34:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:44:01.854 06:34:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:01.854 06:34:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:01.855 06:34:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:01.855 06:34:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:44:01.855 06:34:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:01.855 06:34:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:02.791 06:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:02.791 06:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1316911 00:44:02.791 06:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1316911 ']' 00:44:02.791 06:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1316911 00:44:02.791 06:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:44:02.791 06:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:02.791 06:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1316911 00:44:02.791 06:34:22 
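Each run's summary line is internally consistent: every completed I/O either had an abort submitted for it or the abort failed to submit, so the two counts must sum to the completed total. A quick sanity check over the three spdk_target_abort runs, with the values copied from the qd=4/24/64 summaries above:

```shell
# Verify: aborts submitted + failed to submit == I/O completed.
check() { (( $2 + $3 == $1 )) && echo "ok $1"; }
check 16080 1405 14675   # qd=4  run
check 8587  1247 7340    # qd=24 run
check 38881 2897 35984   # qd=64 run
```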
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:02.791 06:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:02.791 06:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1316911' 00:44:02.791 killing process with pid 1316911 00:44:02.791 06:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1316911 00:44:02.791 06:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1316911 00:44:03.050 00:44:03.050 real 0m14.141s 00:44:03.050 user 0m54.174s 00:44:03.050 sys 0m2.316s 00:44:03.050 06:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:03.050 06:34:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:03.050 ************************************ 00:44:03.050 END TEST spdk_target_abort 00:44:03.050 ************************************ 00:44:03.050 06:34:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:44:03.050 06:34:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:03.050 06:34:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:03.050 06:34:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:03.050 ************************************ 00:44:03.050 START TEST kernel_target_abort 00:44:03.050 ************************************ 00:44:03.050 06:34:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:44:03.050 06:34:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:44:03.050 06:34:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:44:03.050 06:34:23 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:44:03.050 06:34:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:44:03.050 06:34:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:44:03.050 06:34:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:44:03.050 06:34:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:44:03.050 06:34:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:44:03.050 06:34:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:44:03.050 06:34:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:44:03.050 06:34:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:44:03.050 06:34:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:44:03.050 06:34:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:44:03.050 06:34:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:44:03.050 06:34:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:03.050 06:34:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:03.050 06:34:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:44:03.050 06:34:23 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:44:03.050 06:34:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:44:03.050 06:34:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:44:03.050 06:34:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:44:03.050 06:34:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:05.586 Waiting for block devices as requested 00:44:05.844 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:44:05.844 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:05.844 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:06.103 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:06.103 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:06.103 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:06.103 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:06.362 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:06.362 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:06.362 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:06.621 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:06.621 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:06.621 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:06.880 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:06.880 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:06.880 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:06.880 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:07.139 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:44:07.139 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:44:07.139 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:44:07.139 06:34:27 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:44:07.139 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:44:07.139 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:44:07.139 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:44:07.139 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:44:07.139 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:44:07.139 No valid GPT data, bailing 00:44:07.139 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:44:07.139 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:44:07.139 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:44:07.139 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:44:07.139 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:44:07.139 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:07.139 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:07.139 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:44:07.139 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:44:07.139 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:44:07.140 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:44:07.140 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:44:07.140 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:44:07.140 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:44:07.140 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:44:07.140 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:44:07.140 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:44:07.140 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:44:07.140 00:44:07.140 Discovery Log Number of Records 2, Generation counter 2 00:44:07.140 =====Discovery Log Entry 0====== 00:44:07.140 trtype: tcp 00:44:07.140 adrfam: ipv4 00:44:07.140 subtype: current discovery subsystem 00:44:07.140 treq: not specified, sq flow control disable supported 00:44:07.140 portid: 1 00:44:07.140 trsvcid: 4420 00:44:07.140 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:44:07.140 traddr: 10.0.0.1 00:44:07.140 eflags: none 00:44:07.140 sectype: none 00:44:07.140 =====Discovery Log Entry 1====== 00:44:07.140 trtype: tcp 00:44:07.140 adrfam: ipv4 00:44:07.140 subtype: nvme subsystem 00:44:07.140 treq: not specified, sq flow control disable supported 00:44:07.140 portid: 1 00:44:07.140 trsvcid: 4420 00:44:07.140 subnqn: nqn.2016-06.io.spdk:testnqn 00:44:07.140 traddr: 10.0.0.1 00:44:07.140 eflags: none 00:44:07.140 sectype: none 00:44:07.140 06:34:27 
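The configfs layout that `configure_kernel_target` drives in the trace above can be re-derived as plain string bookkeeping (variable names taken from the `nvmf/common.sh` lines in the log; the actual `mkdir`/`echo`/`ln -s` steps need root and the nvmet/nvmet_tcp modules, so only the path construction is shown here):

```shell
# Path bookkeeping for the kernel NVMe-oF target, as in the trace above.
kernel_name=nqn.2016-06.io.spdk:testnqn
nvmet=/sys/kernel/config/nvmet
kernel_subsystem=$nvmet/subsystems/$kernel_name
kernel_namespace=$kernel_subsystem/namespaces/1
kernel_port=$nvmet/ports/1
printf '%s\n' "$kernel_subsystem" "$kernel_namespace" "$kernel_port"
```

Linking `$kernel_subsystem` into `$kernel_port/subsystems/` is what makes the subsystem visible on the TCP port, which is why the discovery log above then reports two records (discovery plus the test subsystem).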
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:44:07.140 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:44:07.140 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:44:07.140 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:44:07.140 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:44:07.398 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:44:07.398 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:44:07.398 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:44:07.398 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:44:07.398 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:07.398 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:44:07.398 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:07.398 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:44:07.398 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:07.398 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:44:07.398 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:44:07.398 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:44:07.398 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:07.398 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:07.398 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:07.398 06:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:10.800 Initializing NVMe Controllers 00:44:10.800 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:10.800 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:10.800 Initialization complete. Launching workers. 
00:44:10.800 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 94827, failed: 0 00:44:10.800 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 94827, failed to submit 0 00:44:10.800 success 0, unsuccessful 94827, failed 0 00:44:10.801 06:34:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:10.801 06:34:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:14.085 Initializing NVMe Controllers 00:44:14.085 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:14.085 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:14.085 Initialization complete. Launching workers. 00:44:14.085 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 151160, failed: 0 00:44:14.085 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37894, failed to submit 113266 00:44:14.085 success 0, unsuccessful 37894, failed 0 00:44:14.085 06:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:14.085 06:34:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:16.618 Initializing NVMe Controllers 00:44:16.618 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:16.618 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:16.618 Initialization complete. Launching workers. 
00:44:16.618 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 142164, failed: 0 00:44:16.618 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35594, failed to submit 106570 00:44:16.618 success 0, unsuccessful 35594, failed 0 00:44:16.618 06:34:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:44:16.618 06:34:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:44:16.618 06:34:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:44:16.618 06:34:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:16.618 06:34:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:16.618 06:34:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:44:16.618 06:34:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:16.618 06:34:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:44:16.618 06:34:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:44:16.618 06:34:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:19.906 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:44:19.906 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:44:19.906 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:44:19.906 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:44:19.906 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:44:19.906 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:44:19.906 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:44:19.906 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:44:19.906 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:44:19.906 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:44:19.906 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:44:19.906 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:44:19.906 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:44:19.906 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:44:19.906 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:44:19.906 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:44:20.472 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:44:20.472 00:44:20.472 real 0m17.443s 00:44:20.472 user 0m9.173s 00:44:20.472 sys 0m4.978s 00:44:20.472 06:34:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:20.472 06:34:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:20.472 ************************************ 00:44:20.472 END TEST kernel_target_abort 00:44:20.472 ************************************ 00:44:20.472 06:34:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:44:20.472 06:34:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:44:20.472 06:34:40 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:20.472 06:34:40 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:44:20.472 06:34:40 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:20.472 06:34:40 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:44:20.472 06:34:40 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:20.472 06:34:40 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:20.472 rmmod nvme_tcp 00:44:20.472 rmmod nvme_fabrics 00:44:20.472 rmmod nvme_keyring 00:44:20.472 06:34:40 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:44:20.472 06:34:40 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:44:20.472 06:34:40 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:44:20.472 06:34:40 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1316911 ']' 00:44:20.472 06:34:40 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1316911 00:44:20.473 06:34:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1316911 ']' 00:44:20.473 06:34:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1316911 00:44:20.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1316911) - No such process 00:44:20.473 06:34:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1316911 is not found' 00:44:20.473 Process with pid 1316911 is not found 00:44:20.473 06:34:40 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:44:20.473 06:34:40 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:23.762 Waiting for block devices as requested 00:44:23.762 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:44:23.762 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:23.762 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:23.762 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:23.762 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:23.762 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:23.762 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:23.762 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:24.021 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:24.021 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:24.021 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:24.280 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:24.280 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:24.280 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:24.280 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:24.539 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:24.539 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:24.539 06:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:24.539 06:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:24.539 06:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:44:24.539 06:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:44:24.539 06:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:24.539 06:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:44:24.539 06:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:24.539 06:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:24.539 06:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:24.539 06:34:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:24.539 06:34:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:27.082 06:34:46 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:27.082 00:44:27.082 real 0m48.058s 00:44:27.082 user 1m7.671s 00:44:27.082 sys 0m15.941s 00:44:27.082 06:34:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:27.082 06:34:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:27.082 ************************************ 00:44:27.082 END TEST nvmf_abort_qd_sizes 00:44:27.082 ************************************ 00:44:27.082 06:34:46 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:44:27.082 06:34:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:27.082 06:34:46 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:44:27.082 06:34:46 -- common/autotest_common.sh@10 -- # set +x 00:44:27.082 ************************************ 00:44:27.082 START TEST keyring_file 00:44:27.082 ************************************ 00:44:27.082 06:34:46 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:44:27.082 * Looking for test storage... 00:44:27.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:44:27.082 06:34:46 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:27.082 06:34:46 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:44:27.082 06:34:46 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:27.082 06:34:46 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@345 -- # : 1 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:27.082 06:34:46 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@353 -- # local d=1 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@355 -- # echo 1 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@353 -- # local d=2 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@355 -- # echo 2 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@368 -- # return 0 00:44:27.082 06:34:46 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:27.082 06:34:46 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:27.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:27.082 --rc genhtml_branch_coverage=1 00:44:27.082 --rc genhtml_function_coverage=1 00:44:27.082 --rc genhtml_legend=1 00:44:27.082 --rc geninfo_all_blocks=1 00:44:27.082 --rc geninfo_unexecuted_blocks=1 00:44:27.082 00:44:27.082 ' 00:44:27.082 06:34:46 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:27.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:27.082 --rc genhtml_branch_coverage=1 00:44:27.082 --rc genhtml_function_coverage=1 00:44:27.082 --rc genhtml_legend=1 00:44:27.082 --rc geninfo_all_blocks=1 00:44:27.082 --rc 
geninfo_unexecuted_blocks=1 00:44:27.082 00:44:27.082 ' 00:44:27.082 06:34:46 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:27.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:27.082 --rc genhtml_branch_coverage=1 00:44:27.082 --rc genhtml_function_coverage=1 00:44:27.082 --rc genhtml_legend=1 00:44:27.082 --rc geninfo_all_blocks=1 00:44:27.082 --rc geninfo_unexecuted_blocks=1 00:44:27.082 00:44:27.082 ' 00:44:27.082 06:34:46 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:27.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:27.082 --rc genhtml_branch_coverage=1 00:44:27.082 --rc genhtml_function_coverage=1 00:44:27.082 --rc genhtml_legend=1 00:44:27.082 --rc geninfo_all_blocks=1 00:44:27.082 --rc geninfo_unexecuted_blocks=1 00:44:27.082 00:44:27.082 ' 00:44:27.082 06:34:46 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:44:27.082 06:34:46 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:27.082 06:34:46 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:44:27.082 06:34:46 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:27.082 06:34:46 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:27.082 06:34:46 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:27.082 06:34:46 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:27.082 06:34:46 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:27.082 06:34:46 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:27.082 06:34:46 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:27.082 06:34:46 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:27.082 06:34:46 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:27.082 06:34:46 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:27.082 06:34:46 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:44:27.082 06:34:46 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:44:27.082 06:34:46 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:27.082 06:34:46 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:27.082 06:34:46 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:27.082 06:34:46 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:27.082 06:34:46 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:27.082 06:34:46 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:27.082 06:34:46 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:27.082 06:34:46 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:27.083 06:34:46 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:27.083 06:34:46 keyring_file -- paths/export.sh@5 -- # export PATH 00:44:27.083 06:34:46 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:27.083 06:34:46 keyring_file -- nvmf/common.sh@51 -- # : 0 00:44:27.083 06:34:46 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:27.083 06:34:46 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:27.083 06:34:46 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:27.083 06:34:46 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:27.083 06:34:46 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:27.083 06:34:46 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:44:27.083 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:27.083 06:34:46 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:27.083 06:34:46 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:27.083 06:34:46 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:27.083 06:34:46 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:27.083 06:34:46 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:27.083 06:34:46 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:27.083 06:34:46 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:44:27.083 06:34:46 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:44:27.083 06:34:46 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:44:27.083 06:34:46 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:27.083 06:34:46 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:27.083 06:34:46 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:27.083 06:34:46 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:27.083 06:34:46 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:27.083 06:34:46 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:27.083 06:34:46 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.REKjXfFC3C 00:44:27.083 06:34:46 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:27.083 06:34:46 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:27.083 06:34:46 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:44:27.083 06:34:46 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:27.083 06:34:46 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:44:27.083 06:34:46 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:27.083 06:34:46 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:27.083 06:34:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.REKjXfFC3C 00:44:27.083 06:34:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.REKjXfFC3C 00:44:27.083 06:34:47 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.REKjXfFC3C 00:44:27.083 06:34:47 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:44:27.083 06:34:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:27.083 06:34:47 keyring_file -- keyring/common.sh@17 -- # name=key1 00:44:27.083 06:34:47 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:27.083 06:34:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:27.083 06:34:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:27.083 06:34:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BYaVPKqbcq 00:44:27.083 06:34:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:27.083 06:34:47 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:27.083 06:34:47 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:44:27.083 06:34:47 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:27.083 06:34:47 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:44:27.083 06:34:47 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:27.083 06:34:47 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:27.083 06:34:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BYaVPKqbcq 00:44:27.083 06:34:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BYaVPKqbcq 00:44:27.083 06:34:47 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.BYaVPKqbcq 
00:44:27.083 06:34:47 keyring_file -- keyring/file.sh@30 -- # tgtpid=1325494 00:44:27.083 06:34:47 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:44:27.083 06:34:47 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1325494 00:44:27.083 06:34:47 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1325494 ']' 00:44:27.083 06:34:47 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:27.083 06:34:47 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:27.083 06:34:47 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:27.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:27.083 06:34:47 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:27.083 06:34:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:27.083 [2024-12-15 06:34:47.140854] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:44:27.083 [2024-12-15 06:34:47.140905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1325494 ] 00:44:27.083 [2024-12-15 06:34:47.215477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:27.342 [2024-12-15 06:34:47.237723] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:44:27.342 06:34:47 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:27.342 06:34:47 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:27.342 06:34:47 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:44:27.342 06:34:47 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:27.342 06:34:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:27.342 [2024-12-15 06:34:47.452197] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:27.342 null0 00:44:27.601 [2024-12-15 06:34:47.484246] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:27.601 [2024-12-15 06:34:47.484535] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:27.601 06:34:47 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:27.601 06:34:47 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:27.601 06:34:47 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:27.601 06:34:47 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:27.601 06:34:47 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:44:27.601 06:34:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:44:27.601 06:34:47 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:44:27.601 06:34:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:27.601 06:34:47 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:27.601 06:34:47 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:27.601 06:34:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:27.601 [2024-12-15 06:34:47.516317] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:44:27.601 request: 00:44:27.601 { 00:44:27.601 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:44:27.601 "secure_channel": false, 00:44:27.601 "listen_address": { 00:44:27.601 "trtype": "tcp", 00:44:27.601 "traddr": "127.0.0.1", 00:44:27.601 "trsvcid": "4420" 00:44:27.601 }, 00:44:27.601 "method": "nvmf_subsystem_add_listener", 00:44:27.601 "req_id": 1 00:44:27.601 } 00:44:27.601 Got JSON-RPC error response 00:44:27.601 response: 00:44:27.601 { 00:44:27.601 "code": -32602, 00:44:27.601 "message": "Invalid parameters" 00:44:27.601 } 00:44:27.601 06:34:47 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:44:27.601 06:34:47 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:27.601 06:34:47 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:27.601 06:34:47 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:27.601 06:34:47 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:27.601 06:34:47 keyring_file -- keyring/file.sh@47 -- # bperfpid=1325507 00:44:27.601 06:34:47 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1325507 /var/tmp/bperf.sock 00:44:27.601 06:34:47 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:44:27.601 06:34:47 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1325507 ']' 00:44:27.601 06:34:47 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:27.601 06:34:47 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:27.601 06:34:47 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:27.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:27.601 06:34:47 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:27.601 06:34:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:27.601 [2024-12-15 06:34:47.570036] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:44:27.601 [2024-12-15 06:34:47.570078] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1325507 ] 00:44:27.601 [2024-12-15 06:34:47.640222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:27.601 [2024-12-15 06:34:47.662058] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:27.860 06:34:47 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:27.861 06:34:47 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:27.861 06:34:47 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.REKjXfFC3C 00:44:27.861 06:34:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.REKjXfFC3C 00:44:27.861 06:34:47 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.BYaVPKqbcq 00:44:27.861 06:34:47 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BYaVPKqbcq 00:44:28.120 06:34:48 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:44:28.120 06:34:48 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:44:28.120 06:34:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:28.120 06:34:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:28.120 06:34:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:28.378 06:34:48 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.REKjXfFC3C == \/\t\m\p\/\t\m\p\.\R\E\K\j\X\f\F\C\3\C ]] 00:44:28.378 06:34:48 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:44:28.378 06:34:48 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:44:28.378 06:34:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:28.378 06:34:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:28.378 06:34:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:28.636 06:34:48 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.BYaVPKqbcq == \/\t\m\p\/\t\m\p\.\B\Y\a\V\P\K\q\b\c\q ]] 00:44:28.636 06:34:48 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:44:28.636 06:34:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:28.636 06:34:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:28.636 06:34:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:28.636 06:34:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:28.636 06:34:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:44:28.636 06:34:48 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:44:28.636 06:34:48 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:44:28.636 06:34:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:28.636 06:34:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:28.636 06:34:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:28.636 06:34:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:28.636 06:34:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:28.895 06:34:48 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:44:28.895 06:34:48 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:28.895 06:34:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:29.153 [2024-12-15 06:34:49.087572] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:29.153 nvme0n1 00:44:29.153 06:34:49 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:44:29.153 06:34:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:29.153 06:34:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:29.153 06:34:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:29.153 06:34:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:29.153 06:34:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:44:29.412 06:34:49 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:44:29.412 06:34:49 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:44:29.412 06:34:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:29.412 06:34:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:29.412 06:34:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:29.412 06:34:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:29.412 06:34:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:29.670 06:34:49 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:44:29.670 06:34:49 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:29.670 Running I/O for 1 seconds... 00:44:30.606 19077.00 IOPS, 74.52 MiB/s 00:44:30.606 Latency(us) 00:44:30.606 [2024-12-15T05:34:50.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:30.607 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:44:30.607 nvme0n1 : 1.00 19127.62 74.72 0.00 0.00 6680.32 2699.46 12420.63 00:44:30.607 [2024-12-15T05:34:50.747Z] =================================================================================================================== 00:44:30.607 [2024-12-15T05:34:50.747Z] Total : 19127.62 74.72 0.00 0.00 6680.32 2699.46 12420.63 00:44:30.607 { 00:44:30.607 "results": [ 00:44:30.607 { 00:44:30.607 "job": "nvme0n1", 00:44:30.607 "core_mask": "0x2", 00:44:30.607 "workload": "randrw", 00:44:30.607 "percentage": 50, 00:44:30.607 "status": "finished", 00:44:30.607 "queue_depth": 128, 00:44:30.607 "io_size": 4096, 00:44:30.607 "runtime": 1.00415, 00:44:30.607 "iops": 19127.620375441915, 00:44:30.607 "mibps": 74.71726709156998, 
00:44:30.607 "io_failed": 0, 00:44:30.607 "io_timeout": 0, 00:44:30.607 "avg_latency_us": 6680.324998177747, 00:44:30.607 "min_latency_us": 2699.4590476190474, 00:44:30.607 "max_latency_us": 12420.63238095238 00:44:30.607 } 00:44:30.607 ], 00:44:30.607 "core_count": 1 00:44:30.607 } 00:44:30.607 06:34:50 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:30.607 06:34:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:30.865 06:34:50 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:44:30.865 06:34:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:30.865 06:34:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:30.865 06:34:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:30.865 06:34:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:30.865 06:34:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:31.123 06:34:51 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:44:31.123 06:34:51 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:44:31.123 06:34:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:31.123 06:34:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:31.123 06:34:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:31.123 06:34:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:31.123 06:34:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:31.381 06:34:51 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:44:31.381 06:34:51 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:31.381 06:34:51 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:31.381 06:34:51 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:31.381 06:34:51 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:31.381 06:34:51 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:31.381 06:34:51 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:31.381 06:34:51 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:31.381 06:34:51 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:31.381 06:34:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:31.381 [2024-12-15 06:34:51.456630] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:31.381 [2024-12-15 06:34:51.457100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2386dc0 (107): Transport endpoint is not connected 00:44:31.381 [2024-12-15 06:34:51.458094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2386dc0 (9): Bad file descriptor 00:44:31.381 [2024-12-15 06:34:51.459095] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:31.381 [2024-12-15 06:34:51.459104] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:31.381 [2024-12-15 06:34:51.459112] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:31.381 [2024-12-15 06:34:51.459121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:44:31.381 request: 00:44:31.381 { 00:44:31.381 "name": "nvme0", 00:44:31.381 "trtype": "tcp", 00:44:31.381 "traddr": "127.0.0.1", 00:44:31.381 "adrfam": "ipv4", 00:44:31.381 "trsvcid": "4420", 00:44:31.381 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:31.381 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:31.381 "prchk_reftag": false, 00:44:31.381 "prchk_guard": false, 00:44:31.381 "hdgst": false, 00:44:31.381 "ddgst": false, 00:44:31.381 "psk": "key1", 00:44:31.381 "allow_unrecognized_csi": false, 00:44:31.381 "method": "bdev_nvme_attach_controller", 00:44:31.381 "req_id": 1 00:44:31.381 } 00:44:31.381 Got JSON-RPC error response 00:44:31.381 response: 00:44:31.381 { 00:44:31.381 "code": -5, 00:44:31.382 "message": "Input/output error" 00:44:31.382 } 00:44:31.382 06:34:51 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:31.382 06:34:51 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:31.382 06:34:51 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:31.382 06:34:51 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:31.382 06:34:51 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:44:31.382 06:34:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:31.382 06:34:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:31.382 06:34:51 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:44:31.382 06:34:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:31.382 06:34:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:31.640 06:34:51 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:44:31.640 06:34:51 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:44:31.640 06:34:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:31.640 06:34:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:31.640 06:34:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:31.640 06:34:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:31.640 06:34:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:31.898 06:34:51 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:44:31.898 06:34:51 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:44:31.898 06:34:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:32.156 06:34:52 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:44:32.156 06:34:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:44:32.156 06:34:52 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:44:32.156 06:34:52 keyring_file -- keyring/file.sh@78 -- # jq length 00:44:32.156 06:34:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:32.415 06:34:52 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:44:32.415 06:34:52 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.REKjXfFC3C 00:44:32.415 06:34:52 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.REKjXfFC3C 00:44:32.415 06:34:52 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:32.415 06:34:52 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.REKjXfFC3C 00:44:32.415 06:34:52 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:32.415 06:34:52 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:32.415 06:34:52 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:32.415 06:34:52 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:32.415 06:34:52 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.REKjXfFC3C 00:44:32.415 06:34:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.REKjXfFC3C 00:44:32.674 [2024-12-15 06:34:52.634274] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.REKjXfFC3C': 0100660 00:44:32.674 [2024-12-15 06:34:52.634300] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:44:32.674 request: 00:44:32.674 { 00:44:32.674 "name": "key0", 00:44:32.674 "path": "/tmp/tmp.REKjXfFC3C", 00:44:32.674 "method": "keyring_file_add_key", 00:44:32.674 "req_id": 1 00:44:32.674 } 00:44:32.674 Got JSON-RPC error response 00:44:32.674 response: 00:44:32.674 { 00:44:32.674 "code": -1, 00:44:32.674 "message": "Operation not permitted" 00:44:32.674 } 00:44:32.674 06:34:52 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:32.674 06:34:52 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:32.674 06:34:52 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:32.674 06:34:52 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:32.674 06:34:52 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.REKjXfFC3C 00:44:32.674 06:34:52 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.REKjXfFC3C 00:44:32.674 06:34:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.REKjXfFC3C 00:44:32.932 06:34:52 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.REKjXfFC3C 00:44:32.932 06:34:52 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:44:32.932 06:34:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:32.932 06:34:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:32.932 06:34:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:32.932 06:34:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:32.932 06:34:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:32.932 06:34:53 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:44:32.932 06:34:53 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:32.932 06:34:53 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:32.932 06:34:53 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:32.932 06:34:53 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:32.932 06:34:53 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:32.932 06:34:53 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:32.932 06:34:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:32.932 06:34:53 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:32.933 06:34:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:33.192 [2024-12-15 06:34:53.215807] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.REKjXfFC3C': No such file or directory 00:44:33.192 [2024-12-15 06:34:53.215829] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:44:33.192 [2024-12-15 06:34:53.215844] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:44:33.192 [2024-12-15 06:34:53.215851] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:44:33.192 [2024-12-15 06:34:53.215858] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:44:33.192 [2024-12-15 06:34:53.215864] bdev_nvme.c:6801:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:44:33.192 request: 00:44:33.192 { 00:44:33.192 "name": "nvme0", 00:44:33.192 "trtype": "tcp", 00:44:33.192 "traddr": "127.0.0.1", 00:44:33.192 "adrfam": "ipv4", 00:44:33.192 "trsvcid": "4420", 00:44:33.192 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:33.192 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:44:33.192 "prchk_reftag": false, 00:44:33.192 "prchk_guard": false, 00:44:33.192 "hdgst": false, 00:44:33.192 "ddgst": false, 00:44:33.192 "psk": "key0", 00:44:33.192 "allow_unrecognized_csi": false, 00:44:33.192 "method": "bdev_nvme_attach_controller", 00:44:33.192 "req_id": 1 00:44:33.192 } 00:44:33.192 Got JSON-RPC error response 00:44:33.192 response: 00:44:33.192 { 00:44:33.192 "code": -19, 00:44:33.192 "message": "No such device" 00:44:33.192 } 00:44:33.192 06:34:53 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:33.192 06:34:53 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:33.192 06:34:53 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:33.192 06:34:53 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:33.192 06:34:53 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:44:33.192 06:34:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:33.451 06:34:53 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:33.451 06:34:53 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:33.451 06:34:53 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:33.451 06:34:53 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:33.451 06:34:53 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:33.451 06:34:53 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:33.451 06:34:53 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.mobtUTaA1U 00:44:33.451 06:34:53 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:33.451 06:34:53 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:33.451 06:34:53 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:44:33.451 06:34:53 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:33.451 06:34:53 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:44:33.451 06:34:53 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:33.451 06:34:53 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:33.451 06:34:53 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.mobtUTaA1U 00:44:33.451 06:34:53 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.mobtUTaA1U 00:44:33.451 06:34:53 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.mobtUTaA1U 00:44:33.451 06:34:53 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mobtUTaA1U 00:44:33.451 06:34:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mobtUTaA1U 00:44:33.710 06:34:53 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:33.710 06:34:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:33.968 nvme0n1 00:44:33.968 06:34:53 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:44:33.968 06:34:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:33.968 06:34:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:33.968 06:34:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:33.968 06:34:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:33.968 06:34:53 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:34.227 06:34:54 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:44:34.227 06:34:54 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:44:34.227 06:34:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:34.227 06:34:54 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:44:34.227 06:34:54 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:44:34.227 06:34:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:34.227 06:34:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:34.227 06:34:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:34.485 06:34:54 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:44:34.485 06:34:54 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:44:34.485 06:34:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:34.485 06:34:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:34.485 06:34:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:34.485 06:34:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:34.485 06:34:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:34.744 06:34:54 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:44:34.744 06:34:54 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:34.744 06:34:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:44:35.003 06:34:54 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:44:35.003 06:34:54 keyring_file -- keyring/file.sh@105 -- # jq length 00:44:35.003 06:34:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:35.003 06:34:55 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:44:35.003 06:34:55 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mobtUTaA1U 00:44:35.003 06:34:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mobtUTaA1U 00:44:35.262 06:34:55 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.BYaVPKqbcq 00:44:35.262 06:34:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BYaVPKqbcq 00:44:35.521 06:34:55 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:35.521 06:34:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:35.778 nvme0n1 00:44:35.778 06:34:55 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:44:35.779 06:34:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:44:36.037 06:34:56 keyring_file -- keyring/file.sh@113 -- # config='{ 00:44:36.037 "subsystems": [ 00:44:36.037 { 00:44:36.037 "subsystem": 
"keyring", 00:44:36.037 "config": [ 00:44:36.037 { 00:44:36.037 "method": "keyring_file_add_key", 00:44:36.037 "params": { 00:44:36.037 "name": "key0", 00:44:36.037 "path": "/tmp/tmp.mobtUTaA1U" 00:44:36.037 } 00:44:36.037 }, 00:44:36.037 { 00:44:36.037 "method": "keyring_file_add_key", 00:44:36.037 "params": { 00:44:36.037 "name": "key1", 00:44:36.037 "path": "/tmp/tmp.BYaVPKqbcq" 00:44:36.037 } 00:44:36.037 } 00:44:36.037 ] 00:44:36.037 }, 00:44:36.037 { 00:44:36.037 "subsystem": "iobuf", 00:44:36.037 "config": [ 00:44:36.037 { 00:44:36.037 "method": "iobuf_set_options", 00:44:36.037 "params": { 00:44:36.037 "small_pool_count": 8192, 00:44:36.037 "large_pool_count": 1024, 00:44:36.037 "small_bufsize": 8192, 00:44:36.037 "large_bufsize": 135168, 00:44:36.037 "enable_numa": false 00:44:36.037 } 00:44:36.037 } 00:44:36.037 ] 00:44:36.037 }, 00:44:36.037 { 00:44:36.037 "subsystem": "sock", 00:44:36.037 "config": [ 00:44:36.037 { 00:44:36.037 "method": "sock_set_default_impl", 00:44:36.037 "params": { 00:44:36.037 "impl_name": "posix" 00:44:36.037 } 00:44:36.037 }, 00:44:36.037 { 00:44:36.037 "method": "sock_impl_set_options", 00:44:36.037 "params": { 00:44:36.037 "impl_name": "ssl", 00:44:36.037 "recv_buf_size": 4096, 00:44:36.037 "send_buf_size": 4096, 00:44:36.037 "enable_recv_pipe": true, 00:44:36.037 "enable_quickack": false, 00:44:36.037 "enable_placement_id": 0, 00:44:36.037 "enable_zerocopy_send_server": true, 00:44:36.037 "enable_zerocopy_send_client": false, 00:44:36.037 "zerocopy_threshold": 0, 00:44:36.037 "tls_version": 0, 00:44:36.037 "enable_ktls": false 00:44:36.037 } 00:44:36.037 }, 00:44:36.037 { 00:44:36.037 "method": "sock_impl_set_options", 00:44:36.037 "params": { 00:44:36.037 "impl_name": "posix", 00:44:36.037 "recv_buf_size": 2097152, 00:44:36.037 "send_buf_size": 2097152, 00:44:36.037 "enable_recv_pipe": true, 00:44:36.037 "enable_quickack": false, 00:44:36.037 "enable_placement_id": 0, 00:44:36.037 "enable_zerocopy_send_server": true, 
00:44:36.037 "enable_zerocopy_send_client": false, 00:44:36.037 "zerocopy_threshold": 0, 00:44:36.037 "tls_version": 0, 00:44:36.037 "enable_ktls": false 00:44:36.037 } 00:44:36.037 } 00:44:36.037 ] 00:44:36.037 }, 00:44:36.037 { 00:44:36.037 "subsystem": "vmd", 00:44:36.037 "config": [] 00:44:36.037 }, 00:44:36.037 { 00:44:36.037 "subsystem": "accel", 00:44:36.037 "config": [ 00:44:36.037 { 00:44:36.037 "method": "accel_set_options", 00:44:36.037 "params": { 00:44:36.037 "small_cache_size": 128, 00:44:36.037 "large_cache_size": 16, 00:44:36.037 "task_count": 2048, 00:44:36.037 "sequence_count": 2048, 00:44:36.037 "buf_count": 2048 00:44:36.037 } 00:44:36.037 } 00:44:36.037 ] 00:44:36.037 }, 00:44:36.037 { 00:44:36.037 "subsystem": "bdev", 00:44:36.037 "config": [ 00:44:36.037 { 00:44:36.037 "method": "bdev_set_options", 00:44:36.037 "params": { 00:44:36.037 "bdev_io_pool_size": 65535, 00:44:36.037 "bdev_io_cache_size": 256, 00:44:36.037 "bdev_auto_examine": true, 00:44:36.037 "iobuf_small_cache_size": 128, 00:44:36.037 "iobuf_large_cache_size": 16 00:44:36.037 } 00:44:36.037 }, 00:44:36.037 { 00:44:36.037 "method": "bdev_raid_set_options", 00:44:36.037 "params": { 00:44:36.037 "process_window_size_kb": 1024, 00:44:36.037 "process_max_bandwidth_mb_sec": 0 00:44:36.037 } 00:44:36.037 }, 00:44:36.037 { 00:44:36.037 "method": "bdev_iscsi_set_options", 00:44:36.037 "params": { 00:44:36.037 "timeout_sec": 30 00:44:36.037 } 00:44:36.037 }, 00:44:36.037 { 00:44:36.037 "method": "bdev_nvme_set_options", 00:44:36.037 "params": { 00:44:36.037 "action_on_timeout": "none", 00:44:36.037 "timeout_us": 0, 00:44:36.037 "timeout_admin_us": 0, 00:44:36.037 "keep_alive_timeout_ms": 10000, 00:44:36.037 "arbitration_burst": 0, 00:44:36.037 "low_priority_weight": 0, 00:44:36.037 "medium_priority_weight": 0, 00:44:36.037 "high_priority_weight": 0, 00:44:36.037 "nvme_adminq_poll_period_us": 10000, 00:44:36.037 "nvme_ioq_poll_period_us": 0, 00:44:36.037 "io_queue_requests": 512, 
00:44:36.037 "delay_cmd_submit": true, 00:44:36.037 "transport_retry_count": 4, 00:44:36.037 "bdev_retry_count": 3, 00:44:36.037 "transport_ack_timeout": 0, 00:44:36.037 "ctrlr_loss_timeout_sec": 0, 00:44:36.037 "reconnect_delay_sec": 0, 00:44:36.037 "fast_io_fail_timeout_sec": 0, 00:44:36.037 "disable_auto_failback": false, 00:44:36.037 "generate_uuids": false, 00:44:36.037 "transport_tos": 0, 00:44:36.037 "nvme_error_stat": false, 00:44:36.037 "rdma_srq_size": 0, 00:44:36.037 "io_path_stat": false, 00:44:36.037 "allow_accel_sequence": false, 00:44:36.037 "rdma_max_cq_size": 0, 00:44:36.037 "rdma_cm_event_timeout_ms": 0, 00:44:36.037 "dhchap_digests": [ 00:44:36.037 "sha256", 00:44:36.037 "sha384", 00:44:36.037 "sha512" 00:44:36.037 ], 00:44:36.037 "dhchap_dhgroups": [ 00:44:36.037 "null", 00:44:36.037 "ffdhe2048", 00:44:36.037 "ffdhe3072", 00:44:36.037 "ffdhe4096", 00:44:36.037 "ffdhe6144", 00:44:36.037 "ffdhe8192" 00:44:36.037 ], 00:44:36.037 "rdma_umr_per_io": false 00:44:36.037 } 00:44:36.037 }, 00:44:36.037 { 00:44:36.037 "method": "bdev_nvme_attach_controller", 00:44:36.037 "params": { 00:44:36.037 "name": "nvme0", 00:44:36.037 "trtype": "TCP", 00:44:36.037 "adrfam": "IPv4", 00:44:36.037 "traddr": "127.0.0.1", 00:44:36.037 "trsvcid": "4420", 00:44:36.037 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:36.037 "prchk_reftag": false, 00:44:36.037 "prchk_guard": false, 00:44:36.037 "ctrlr_loss_timeout_sec": 0, 00:44:36.037 "reconnect_delay_sec": 0, 00:44:36.037 "fast_io_fail_timeout_sec": 0, 00:44:36.037 "psk": "key0", 00:44:36.037 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:36.037 "hdgst": false, 00:44:36.037 "ddgst": false, 00:44:36.037 "multipath": "multipath" 00:44:36.037 } 00:44:36.037 }, 00:44:36.037 { 00:44:36.037 "method": "bdev_nvme_set_hotplug", 00:44:36.038 "params": { 00:44:36.038 "period_us": 100000, 00:44:36.038 "enable": false 00:44:36.038 } 00:44:36.038 }, 00:44:36.038 { 00:44:36.038 "method": "bdev_wait_for_examine" 00:44:36.038 } 00:44:36.038 ] 
00:44:36.038 }, 00:44:36.038 { 00:44:36.038 "subsystem": "nbd", 00:44:36.038 "config": [] 00:44:36.038 } 00:44:36.038 ] 00:44:36.038 }' 00:44:36.038 06:34:56 keyring_file -- keyring/file.sh@115 -- # killprocess 1325507 00:44:36.038 06:34:56 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1325507 ']' 00:44:36.038 06:34:56 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1325507 00:44:36.038 06:34:56 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:36.038 06:34:56 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:36.038 06:34:56 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1325507 00:44:36.038 06:34:56 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:36.038 06:34:56 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:36.038 06:34:56 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1325507' 00:44:36.038 killing process with pid 1325507 00:44:36.038 06:34:56 keyring_file -- common/autotest_common.sh@973 -- # kill 1325507 00:44:36.038 Received shutdown signal, test time was about 1.000000 seconds 00:44:36.038 00:44:36.038 Latency(us) 00:44:36.038 [2024-12-15T05:34:56.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:36.038 [2024-12-15T05:34:56.178Z] =================================================================================================================== 00:44:36.038 [2024-12-15T05:34:56.178Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:36.038 06:34:56 keyring_file -- common/autotest_common.sh@978 -- # wait 1325507 00:44:36.296 06:34:56 keyring_file -- keyring/file.sh@118 -- # bperfpid=1326983 00:44:36.296 06:34:56 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1326983 /var/tmp/bperf.sock 00:44:36.296 06:34:56 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1326983 ']' 00:44:36.296 06:34:56 keyring_file -- 
keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:44:36.296 06:34:56 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:36.296 06:34:56 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:36.296 06:34:56 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:44:36.296 "subsystems": [ 00:44:36.296 { 00:44:36.296 "subsystem": "keyring", 00:44:36.296 "config": [ 00:44:36.296 { 00:44:36.296 "method": "keyring_file_add_key", 00:44:36.296 "params": { 00:44:36.296 "name": "key0", 00:44:36.296 "path": "/tmp/tmp.mobtUTaA1U" 00:44:36.296 } 00:44:36.296 }, 00:44:36.296 { 00:44:36.296 "method": "keyring_file_add_key", 00:44:36.296 "params": { 00:44:36.296 "name": "key1", 00:44:36.296 "path": "/tmp/tmp.BYaVPKqbcq" 00:44:36.296 } 00:44:36.296 } 00:44:36.296 ] 00:44:36.296 }, 00:44:36.296 { 00:44:36.296 "subsystem": "iobuf", 00:44:36.296 "config": [ 00:44:36.296 { 00:44:36.296 "method": "iobuf_set_options", 00:44:36.296 "params": { 00:44:36.296 "small_pool_count": 8192, 00:44:36.296 "large_pool_count": 1024, 00:44:36.296 "small_bufsize": 8192, 00:44:36.296 "large_bufsize": 135168, 00:44:36.296 "enable_numa": false 00:44:36.296 } 00:44:36.296 } 00:44:36.296 ] 00:44:36.296 }, 00:44:36.296 { 00:44:36.296 "subsystem": "sock", 00:44:36.297 "config": [ 00:44:36.297 { 00:44:36.297 "method": "sock_set_default_impl", 00:44:36.297 "params": { 00:44:36.297 "impl_name": "posix" 00:44:36.297 } 00:44:36.297 }, 00:44:36.297 { 00:44:36.297 "method": "sock_impl_set_options", 00:44:36.297 "params": { 00:44:36.297 "impl_name": "ssl", 00:44:36.297 "recv_buf_size": 4096, 00:44:36.297 "send_buf_size": 4096, 00:44:36.297 "enable_recv_pipe": true, 00:44:36.297 "enable_quickack": false, 00:44:36.297 "enable_placement_id": 0, 00:44:36.297 "enable_zerocopy_send_server": true, 00:44:36.297 
"enable_zerocopy_send_client": false, 00:44:36.297 "zerocopy_threshold": 0, 00:44:36.297 "tls_version": 0, 00:44:36.297 "enable_ktls": false 00:44:36.297 } 00:44:36.297 }, 00:44:36.297 { 00:44:36.297 "method": "sock_impl_set_options", 00:44:36.297 "params": { 00:44:36.297 "impl_name": "posix", 00:44:36.297 "recv_buf_size": 2097152, 00:44:36.297 "send_buf_size": 2097152, 00:44:36.297 "enable_recv_pipe": true, 00:44:36.297 "enable_quickack": false, 00:44:36.297 "enable_placement_id": 0, 00:44:36.297 "enable_zerocopy_send_server": true, 00:44:36.297 "enable_zerocopy_send_client": false, 00:44:36.297 "zerocopy_threshold": 0, 00:44:36.297 "tls_version": 0, 00:44:36.297 "enable_ktls": false 00:44:36.297 } 00:44:36.297 } 00:44:36.297 ] 00:44:36.297 }, 00:44:36.297 { 00:44:36.297 "subsystem": "vmd", 00:44:36.297 "config": [] 00:44:36.297 }, 00:44:36.297 { 00:44:36.297 "subsystem": "accel", 00:44:36.297 "config": [ 00:44:36.297 { 00:44:36.297 "method": "accel_set_options", 00:44:36.297 "params": { 00:44:36.297 "small_cache_size": 128, 00:44:36.297 "large_cache_size": 16, 00:44:36.297 "task_count": 2048, 00:44:36.297 "sequence_count": 2048, 00:44:36.297 "buf_count": 2048 00:44:36.297 } 00:44:36.297 } 00:44:36.297 ] 00:44:36.297 }, 00:44:36.297 { 00:44:36.297 "subsystem": "bdev", 00:44:36.297 "config": [ 00:44:36.297 { 00:44:36.297 "method": "bdev_set_options", 00:44:36.297 "params": { 00:44:36.297 "bdev_io_pool_size": 65535, 00:44:36.297 "bdev_io_cache_size": 256, 00:44:36.297 "bdev_auto_examine": true, 00:44:36.297 "iobuf_small_cache_size": 128, 00:44:36.297 "iobuf_large_cache_size": 16 00:44:36.297 } 00:44:36.297 }, 00:44:36.297 { 00:44:36.297 "method": "bdev_raid_set_options", 00:44:36.297 "params": { 00:44:36.297 "process_window_size_kb": 1024, 00:44:36.297 "process_max_bandwidth_mb_sec": 0 00:44:36.297 } 00:44:36.297 }, 00:44:36.297 { 00:44:36.297 "method": "bdev_iscsi_set_options", 00:44:36.297 "params": { 00:44:36.297 "timeout_sec": 30 00:44:36.297 } 00:44:36.297 }, 
00:44:36.297 { 00:44:36.297 "method": "bdev_nvme_set_options", 00:44:36.297 "params": { 00:44:36.297 "action_on_timeout": "none", 00:44:36.297 "timeout_us": 0, 00:44:36.297 "timeout_admin_us": 0, 00:44:36.297 "keep_alive_timeout_ms": 10000, 00:44:36.297 "arbitration_burst": 0, 00:44:36.297 "low_priority_weight": 0, 00:44:36.297 "medium_priority_weight": 0, 00:44:36.297 "high_priority_weight": 0, 00:44:36.297 "nvme_adminq_poll_period_us": 10000, 00:44:36.297 "nvme_ioq_poll_period_us": 0, 00:44:36.297 "io_queue_requests": 512, 00:44:36.297 "delay_cmd_submit": true, 00:44:36.297 "transport_retry_count": 4, 00:44:36.297 "bdev_retry_count": 3, 00:44:36.297 "transport_ack_timeout": 0, 00:44:36.297 "ctrlr_loss_timeout_sec": 0, 00:44:36.297 "reconnect_delay_sec": 0, 00:44:36.297 "fast_io_fail_timeout_sec": 0, 00:44:36.297 "disable_auto_failback": false, 00:44:36.297 "generate_uuids": false, 00:44:36.297 "transport_tos": 0, 00:44:36.297 "nvme_error_stat": false, 00:44:36.297 "rdma_srq_size": 0, 00:44:36.297 06:34:56 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:44:36.297 "io_path_stat": false, 00:44:36.297 "allow_accel_sequence": false, 00:44:36.297 "rdma_max_cq_size": 0, 00:44:36.297 "rdma_cm_event_timeout_ms": 0, 00:44:36.297 "dhchap_digests": [ 00:44:36.297 "sha256", 00:44:36.297 "sha384", 00:44:36.297 "sha512" 00:44:36.297 ], 00:44:36.297 "dhchap_dhgroups": [ 00:44:36.297 "null", 00:44:36.297 "ffdhe2048", 00:44:36.297 "ffdhe3072", 00:44:36.297 "ffdhe4096", 00:44:36.297 "ffdhe6144", 00:44:36.297 "ffdhe8192" 00:44:36.297 ], 00:44:36.297 "rdma_umr_per_io": false 00:44:36.297 } 00:44:36.297 }, 00:44:36.297 { 00:44:36.297 "method": "bdev_nvme_attach_controller", 00:44:36.297 "params": { 00:44:36.297 "name": "nvme0", 00:44:36.297 "trtype": "TCP", 00:44:36.297 "adrfam": "IPv4", 00:44:36.297 "traddr": "127.0.0.1", 00:44:36.297 "trsvcid": "4420", 00:44:36.297 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:36.297 "prchk_reftag": false, 00:44:36.297 "prchk_guard": false, 00:44:36.297 "ctrlr_loss_timeout_sec": 0, 00:44:36.297 "reconnect_delay_sec": 0, 00:44:36.297 "fast_io_fail_timeout_sec": 0, 00:44:36.297 "psk": "key0", 00:44:36.297 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:36.297 "hdgst": false, 00:44:36.297 "ddgst": false, 00:44:36.297 "multipath": "multipath" 00:44:36.297 } 00:44:36.297 }, 00:44:36.297 { 00:44:36.297 "method": "bdev_nvme_set_hotplug", 00:44:36.297 "params": { 00:44:36.297 "period_us": 100000, 00:44:36.297 "enable": false 00:44:36.297 } 00:44:36.297 }, 00:44:36.297 { 00:44:36.297 "method": "bdev_wait_for_examine" 00:44:36.297 } 00:44:36.297 ] 00:44:36.297 }, 00:44:36.297 { 00:44:36.297 "subsystem": "nbd", 00:44:36.297 "config": [] 00:44:36.297 } 00:44:36.297 ] 00:44:36.297 }' 00:44:36.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:44:36.297 06:34:56 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:36.297 06:34:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:36.297 [2024-12-15 06:34:56.249549] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:44:36.297 [2024-12-15 06:34:56.249594] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1326983 ] 00:44:36.297 [2024-12-15 06:34:56.322358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:36.297 [2024-12-15 06:34:56.343822] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:36.556 [2024-12-15 06:34:56.499064] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:37.123 06:34:57 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:37.123 06:34:57 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:37.123 06:34:57 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:44:37.123 06:34:57 keyring_file -- keyring/file.sh@121 -- # jq length 00:44:37.123 06:34:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:37.380 06:34:57 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:44:37.380 06:34:57 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:44:37.380 06:34:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:37.380 06:34:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:37.381 06:34:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:37.381 06:34:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:37.381 06:34:57 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:37.381 06:34:57 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:44:37.381 06:34:57 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:44:37.381 06:34:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:37.381 06:34:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:37.381 06:34:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:37.381 06:34:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:37.381 06:34:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:37.639 06:34:57 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:44:37.639 06:34:57 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:44:37.639 06:34:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:44:37.639 06:34:57 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:44:37.897 06:34:57 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:44:37.897 06:34:57 keyring_file -- keyring/file.sh@1 -- # cleanup 00:44:37.897 06:34:57 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.mobtUTaA1U /tmp/tmp.BYaVPKqbcq 00:44:37.897 06:34:57 keyring_file -- keyring/file.sh@20 -- # killprocess 1326983 00:44:37.897 06:34:57 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1326983 ']' 00:44:37.897 06:34:57 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1326983 00:44:37.897 06:34:57 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:37.897 06:34:57 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:37.897 06:34:57 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 1326983 00:44:37.897 06:34:57 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:37.897 06:34:57 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:37.897 06:34:57 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1326983' 00:44:37.897 killing process with pid 1326983 00:44:37.897 06:34:57 keyring_file -- common/autotest_common.sh@973 -- # kill 1326983 00:44:37.897 Received shutdown signal, test time was about 1.000000 seconds 00:44:37.897 00:44:37.897 Latency(us) 00:44:37.897 [2024-12-15T05:34:58.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:37.897 [2024-12-15T05:34:58.037Z] =================================================================================================================== 00:44:37.897 [2024-12-15T05:34:58.037Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:44:37.897 06:34:57 keyring_file -- common/autotest_common.sh@978 -- # wait 1326983 00:44:38.156 06:34:58 keyring_file -- keyring/file.sh@21 -- # killprocess 1325494 00:44:38.156 06:34:58 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1325494 ']' 00:44:38.156 06:34:58 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1325494 00:44:38.156 06:34:58 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:38.156 06:34:58 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:38.156 06:34:58 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1325494 00:44:38.156 06:34:58 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:38.156 06:34:58 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:38.156 06:34:58 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1325494' 00:44:38.156 killing process with pid 1325494 00:44:38.156 06:34:58 keyring_file -- common/autotest_common.sh@973 -- # kill 
1325494 00:44:38.156 06:34:58 keyring_file -- common/autotest_common.sh@978 -- # wait 1325494 00:44:38.415 00:44:38.415 real 0m11.644s 00:44:38.415 user 0m28.945s 00:44:38.415 sys 0m2.685s 00:44:38.415 06:34:58 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:38.415 06:34:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:38.415 ************************************ 00:44:38.415 END TEST keyring_file 00:44:38.415 ************************************ 00:44:38.415 06:34:58 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:44:38.415 06:34:58 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:38.415 06:34:58 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:38.415 06:34:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:38.415 06:34:58 -- common/autotest_common.sh@10 -- # set +x 00:44:38.415 ************************************ 00:44:38.415 START TEST keyring_linux 00:44:38.415 ************************************ 00:44:38.415 06:34:58 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:38.415 Joined session keyring: 353490147 00:44:38.675 * Looking for test storage... 
00:44:38.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:44:38.675 06:34:58 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:38.675 06:34:58 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:44:38.675 06:34:58 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:38.675 06:34:58 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@345 -- # : 1 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@368 -- # return 0 00:44:38.675 06:34:58 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:38.675 06:34:58 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:38.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:38.675 --rc genhtml_branch_coverage=1 00:44:38.675 --rc genhtml_function_coverage=1 00:44:38.675 --rc genhtml_legend=1 00:44:38.675 --rc geninfo_all_blocks=1 00:44:38.675 --rc geninfo_unexecuted_blocks=1 00:44:38.675 00:44:38.675 ' 00:44:38.675 06:34:58 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:38.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:38.675 --rc genhtml_branch_coverage=1 00:44:38.675 --rc genhtml_function_coverage=1 00:44:38.675 --rc genhtml_legend=1 00:44:38.675 --rc geninfo_all_blocks=1 00:44:38.675 --rc geninfo_unexecuted_blocks=1 00:44:38.675 00:44:38.675 ' 
00:44:38.675 06:34:58 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:38.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:38.675 --rc genhtml_branch_coverage=1 00:44:38.675 --rc genhtml_function_coverage=1 00:44:38.675 --rc genhtml_legend=1 00:44:38.675 --rc geninfo_all_blocks=1 00:44:38.675 --rc geninfo_unexecuted_blocks=1 00:44:38.675 00:44:38.675 ' 00:44:38.675 06:34:58 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:38.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:38.675 --rc genhtml_branch_coverage=1 00:44:38.675 --rc genhtml_function_coverage=1 00:44:38.675 --rc genhtml_legend=1 00:44:38.675 --rc geninfo_all_blocks=1 00:44:38.675 --rc geninfo_unexecuted_blocks=1 00:44:38.675 00:44:38.675 ' 00:44:38.675 06:34:58 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:44:38.675 06:34:58 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:38.675 06:34:58 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:44:38.675 06:34:58 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:38.675 06:34:58 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:38.675 06:34:58 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:38.675 06:34:58 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:38.675 06:34:58 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:38.675 06:34:58 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:38.675 06:34:58 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:38.675 06:34:58 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:38.675 06:34:58 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:38.675 06:34:58 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:44:38.675 06:34:58 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:44:38.675 06:34:58 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:44:38.675 06:34:58 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:38.675 06:34:58 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:38.675 06:34:58 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:38.675 06:34:58 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:38.675 06:34:58 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:38.675 06:34:58 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:38.675 06:34:58 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:38.675 06:34:58 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:38.675 06:34:58 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:38.675 06:34:58 keyring_linux -- paths/export.sh@5 -- # export PATH 00:44:38.675 06:34:58 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:38.675 06:34:58 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:44:38.675 06:34:58 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:38.675 06:34:58 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:38.675 06:34:58 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:38.675 06:34:58 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:38.675 06:34:58 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:38.676 06:34:58 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:44:38.676 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:38.676 06:34:58 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:38.676 06:34:58 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:38.676 06:34:58 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:38.676 06:34:58 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:38.676 06:34:58 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:38.676 06:34:58 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:38.676 06:34:58 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:44:38.676 06:34:58 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:44:38.676 06:34:58 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:44:38.676 06:34:58 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:44:38.676 06:34:58 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:38.676 06:34:58 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:44:38.676 06:34:58 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:38.676 06:34:58 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:38.676 06:34:58 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:44:38.676 06:34:58 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:38.676 06:34:58 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:38.676 06:34:58 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:44:38.676 06:34:58 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:38.676 06:34:58 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:44:38.676 06:34:58 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:44:38.676 06:34:58 keyring_linux -- nvmf/common.sh@733 -- # python - 00:44:38.676 06:34:58 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:44:38.676 06:34:58 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:44:38.676 /tmp/:spdk-test:key0 00:44:38.676 06:34:58 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:44:38.676 06:34:58 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:38.676 06:34:58 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:44:38.676 06:34:58 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:38.676 06:34:58 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:38.676 06:34:58 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:44:38.676 06:34:58 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:38.676 06:34:58 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:38.676 06:34:58 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:44:38.676 06:34:58 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:38.676 06:34:58 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:44:38.676 06:34:58 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:44:38.676 06:34:58 keyring_linux -- nvmf/common.sh@733 -- # python - 00:44:38.676 06:34:58 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:44:38.676 06:34:58 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:44:38.676 /tmp/:spdk-test:key1 00:44:38.676 06:34:58 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1327526 00:44:38.676 06:34:58 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 1327526 00:44:38.676 06:34:58 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:44:38.676 06:34:58 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1327526 ']' 00:44:38.676 06:34:58 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:38.676 06:34:58 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:38.676 06:34:58 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:38.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:38.676 06:34:58 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:38.676 06:34:58 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:38.935 [2024-12-15 06:34:58.852230] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:44:38.935 [2024-12-15 06:34:58.852280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327526 ] 00:44:38.935 [2024-12-15 06:34:58.927392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:38.935 [2024-12-15 06:34:58.949884] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:44:39.193 06:34:59 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:39.193 06:34:59 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:44:39.193 06:34:59 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:44:39.194 06:34:59 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:39.194 06:34:59 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:39.194 [2024-12-15 06:34:59.151329] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:39.194 null0 00:44:39.194 [2024-12-15 06:34:59.183387] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:39.194 [2024-12-15 06:34:59.183663] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:39.194 06:34:59 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:39.194 06:34:59 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:44:39.194 370352451 00:44:39.194 06:34:59 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:44:39.194 495034399 00:44:39.194 06:34:59 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1327534 00:44:39.194 06:34:59 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w 
randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:44:39.194 06:34:59 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1327534 /var/tmp/bperf.sock 00:44:39.194 06:34:59 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1327534 ']' 00:44:39.194 06:34:59 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:39.194 06:34:59 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:39.194 06:34:59 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:39.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:39.194 06:34:59 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:39.194 06:34:59 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:39.194 [2024-12-15 06:34:59.252001] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:44:39.194 [2024-12-15 06:34:59.252042] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327534 ] 00:44:39.194 [2024-12-15 06:34:59.324837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:39.452 [2024-12-15 06:34:59.346968] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:39.452 06:34:59 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:39.452 06:34:59 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:44:39.452 06:34:59 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:44:39.452 06:34:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:44:39.712 06:34:59 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:44:39.712 06:34:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:44:39.712 06:34:59 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:39.712 06:34:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:39.971 [2024-12-15 06:35:00.014239] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:39.971 nvme0n1 00:44:39.971 06:35:00 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:44:39.971 06:35:00 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:44:39.971 06:35:00 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:39.971 06:35:00 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:39.971 06:35:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:39.971 06:35:00 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:40.230 06:35:00 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:44:40.230 06:35:00 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:40.230 06:35:00 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:44:40.230 06:35:00 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:44:40.230 06:35:00 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:40.230 06:35:00 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:44:40.230 06:35:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:40.489 06:35:00 keyring_linux -- keyring/linux.sh@25 -- # sn=370352451 00:44:40.489 06:35:00 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:44:40.489 06:35:00 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:40.489 06:35:00 keyring_linux -- keyring/linux.sh@26 -- # [[ 370352451 == \3\7\0\3\5\2\4\5\1 ]] 00:44:40.489 06:35:00 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 370352451 00:44:40.489 06:35:00 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:44:40.489 06:35:00 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:40.489 Running I/O for 1 seconds... 00:44:41.868 21448.00 IOPS, 83.78 MiB/s 00:44:41.868 Latency(us) 00:44:41.868 [2024-12-15T05:35:02.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:41.868 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:44:41.868 nvme0n1 : 1.01 21450.43 83.79 0.00 0.00 5947.75 5055.63 10922.67 00:44:41.868 [2024-12-15T05:35:02.008Z] =================================================================================================================== 00:44:41.868 [2024-12-15T05:35:02.008Z] Total : 21450.43 83.79 0.00 0.00 5947.75 5055.63 10922.67 00:44:41.868 { 00:44:41.868 "results": [ 00:44:41.868 { 00:44:41.868 "job": "nvme0n1", 00:44:41.868 "core_mask": "0x2", 00:44:41.868 "workload": "randread", 00:44:41.868 "status": "finished", 00:44:41.868 "queue_depth": 128, 00:44:41.868 "io_size": 4096, 00:44:41.868 "runtime": 1.005854, 00:44:41.868 "iops": 21450.429187536163, 00:44:41.868 "mibps": 83.79073901381314, 00:44:41.868 "io_failed": 0, 00:44:41.868 "io_timeout": 0, 00:44:41.868 "avg_latency_us": 5947.754758196939, 00:44:41.868 "min_latency_us": 5055.634285714285, 00:44:41.868 "max_latency_us": 10922.666666666666 00:44:41.868 } 00:44:41.868 ], 00:44:41.868 "core_count": 1 00:44:41.868 } 00:44:41.868 06:35:01 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:41.868 06:35:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:41.868 06:35:01 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:44:41.868 06:35:01 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:44:41.868 06:35:01 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:41.868 06:35:01 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:41.868 06:35:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:41.868 06:35:01 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:42.127 06:35:02 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:44:42.127 06:35:02 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:42.127 06:35:02 keyring_linux -- keyring/linux.sh@23 -- # return 00:44:42.127 06:35:02 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:42.127 06:35:02 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:44:42.127 06:35:02 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:42.127 06:35:02 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:42.127 06:35:02 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:42.127 06:35:02 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:42.127 06:35:02 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:42.127 06:35:02 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:42.127 06:35:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:42.127 [2024-12-15 06:35:02.222376] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:42.127 [2024-12-15 06:35:02.222719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202b3d0 (107): Transport endpoint is not connected 00:44:42.127 [2024-12-15 06:35:02.223713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202b3d0 (9): Bad file descriptor 00:44:42.127 [2024-12-15 06:35:02.224714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:42.127 [2024-12-15 06:35:02.224724] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:42.127 [2024-12-15 06:35:02.224731] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:42.127 [2024-12-15 06:35:02.224739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:44:42.127 request: 00:44:42.127 { 00:44:42.127 "name": "nvme0", 00:44:42.127 "trtype": "tcp", 00:44:42.127 "traddr": "127.0.0.1", 00:44:42.127 "adrfam": "ipv4", 00:44:42.127 "trsvcid": "4420", 00:44:42.127 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:42.127 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:42.127 "prchk_reftag": false, 00:44:42.127 "prchk_guard": false, 00:44:42.127 "hdgst": false, 00:44:42.127 "ddgst": false, 00:44:42.127 "psk": ":spdk-test:key1", 00:44:42.127 "allow_unrecognized_csi": false, 00:44:42.127 "method": "bdev_nvme_attach_controller", 00:44:42.127 "req_id": 1 00:44:42.127 } 00:44:42.127 Got JSON-RPC error response 00:44:42.127 response: 00:44:42.127 { 00:44:42.127 "code": -5, 00:44:42.127 "message": "Input/output error" 00:44:42.127 } 00:44:42.127 06:35:02 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:44:42.127 06:35:02 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:42.127 06:35:02 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:42.127 06:35:02 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:42.127 06:35:02 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:44:42.127 06:35:02 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:42.127 06:35:02 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:44:42.127 06:35:02 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:44:42.127 06:35:02 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:44:42.127 06:35:02 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:42.127 06:35:02 keyring_linux -- keyring/linux.sh@33 -- # sn=370352451 00:44:42.127 06:35:02 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 370352451 00:44:42.127 1 links removed 00:44:42.127 06:35:02 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:42.127 06:35:02 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:44:42.127 
06:35:02 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:44:42.127 06:35:02 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:44:42.127 06:35:02 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:44:42.127 06:35:02 keyring_linux -- keyring/linux.sh@33 -- # sn=495034399 00:44:42.127 06:35:02 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 495034399 00:44:42.127 1 links removed 00:44:42.128 06:35:02 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1327534 00:44:42.128 06:35:02 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1327534 ']' 00:44:42.128 06:35:02 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1327534 00:44:42.128 06:35:02 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:44:42.128 06:35:02 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:42.128 06:35:02 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1327534 00:44:42.387 06:35:02 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:42.387 06:35:02 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:42.387 06:35:02 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1327534' 00:44:42.387 killing process with pid 1327534 00:44:42.387 06:35:02 keyring_linux -- common/autotest_common.sh@973 -- # kill 1327534 00:44:42.387 Received shutdown signal, test time was about 1.000000 seconds 00:44:42.387 00:44:42.387 Latency(us) 00:44:42.387 [2024-12-15T05:35:02.527Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:42.387 [2024-12-15T05:35:02.527Z] =================================================================================================================== 00:44:42.387 [2024-12-15T05:35:02.527Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:42.387 06:35:02 keyring_linux -- common/autotest_common.sh@978 -- # wait 1327534 
00:44:42.387 06:35:02 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1327526 00:44:42.387 06:35:02 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1327526 ']' 00:44:42.387 06:35:02 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1327526 00:44:42.387 06:35:02 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:44:42.387 06:35:02 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:42.387 06:35:02 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1327526 00:44:42.387 06:35:02 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:42.387 06:35:02 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:42.387 06:35:02 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1327526' 00:44:42.387 killing process with pid 1327526 00:44:42.387 06:35:02 keyring_linux -- common/autotest_common.sh@973 -- # kill 1327526 00:44:42.387 06:35:02 keyring_linux -- common/autotest_common.sh@978 -- # wait 1327526 00:44:42.957 00:44:42.957 real 0m4.315s 00:44:42.957 user 0m8.120s 00:44:42.957 sys 0m1.482s 00:44:42.957 06:35:02 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:42.957 06:35:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:42.957 ************************************ 00:44:42.957 END TEST keyring_linux 00:44:42.957 ************************************ 00:44:42.957 06:35:02 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:44:42.957 06:35:02 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:44:42.957 06:35:02 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:44:42.957 06:35:02 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:44:42.957 06:35:02 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:44:42.957 06:35:02 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:44:42.957 06:35:02 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:44:42.957 06:35:02 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:44:42.957 06:35:02 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:44:42.957 06:35:02 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:44:42.957 06:35:02 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:44:42.957 06:35:02 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:44:42.957 06:35:02 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:44:42.957 06:35:02 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:44:42.957 06:35:02 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:44:42.957 06:35:02 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:44:42.957 06:35:02 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:44:42.957 06:35:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:42.957 06:35:02 -- common/autotest_common.sh@10 -- # set +x 00:44:42.957 06:35:02 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:44:42.957 06:35:02 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:44:42.957 06:35:02 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:44:42.957 06:35:02 -- common/autotest_common.sh@10 -- # set +x 00:44:48.229 INFO: APP EXITING 00:44:48.229 INFO: killing all VMs 00:44:48.229 INFO: killing vhost app 00:44:48.229 INFO: EXIT DONE 00:44:50.864 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:44:50.864 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:44:50.864 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:44:51.123 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:44:51.123 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:44:51.123 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:44:51.123 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:44:51.123 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:44:51.123 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:44:51.123 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:44:51.123 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:44:51.123 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:44:51.123 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:44:51.123 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:44:51.123 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:44:51.382 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:44:51.382 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:44:54.676 Cleaning 00:44:54.676 Removing: /var/run/dpdk/spdk0/config 00:44:54.676 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:44:54.676 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:44:54.676 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:44:54.676 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:44:54.676 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:44:54.676 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:44:54.676 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:44:54.676 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:44:54.676 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:44:54.676 Removing: /var/run/dpdk/spdk0/hugepage_info 00:44:54.676 Removing: /var/run/dpdk/spdk1/config 00:44:54.676 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:44:54.676 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:44:54.676 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:44:54.676 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:44:54.676 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:44:54.676 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:44:54.676 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:44:54.676 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:44:54.676 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:44:54.676 Removing: /var/run/dpdk/spdk1/hugepage_info 00:44:54.676 Removing: /var/run/dpdk/spdk2/config 00:44:54.676 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:44:54.676 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:44:54.676 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:44:54.676 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:44:54.676 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:44:54.676 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:44:54.676 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:44:54.676 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:44:54.676 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:44:54.676 Removing: /var/run/dpdk/spdk2/hugepage_info 00:44:54.676 Removing: /var/run/dpdk/spdk3/config 00:44:54.676 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:44:54.676 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:44:54.676 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:44:54.676 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:44:54.676 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:44:54.676 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:44:54.676 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:44:54.676 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:44:54.676 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:44:54.676 Removing: /var/run/dpdk/spdk3/hugepage_info 00:44:54.676 Removing: /var/run/dpdk/spdk4/config 00:44:54.676 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:44:54.677 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:44:54.677 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:44:54.677 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:44:54.677 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:44:54.677 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:44:54.677 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:44:54.677 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:44:54.677 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:44:54.677 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:44:54.677 Removing: /dev/shm/bdev_svc_trace.1 00:44:54.677 Removing: /dev/shm/nvmf_trace.0 00:44:54.677 Removing: /dev/shm/spdk_tgt_trace.pid772887 00:44:54.677 Removing: /var/run/dpdk/spdk0 00:44:54.677 Removing: /var/run/dpdk/spdk1 00:44:54.677 Removing: /var/run/dpdk/spdk2 00:44:54.677 Removing: /var/run/dpdk/spdk3 00:44:54.677 Removing: /var/run/dpdk/spdk4 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1010510 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1014935 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1016495 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1018280 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1018498 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1018523 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1018747 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1019241 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1020936 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1021764 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1022111 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1024299 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1024667 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1025269 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1029248 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1034557 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1034559 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1034561 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1038383 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1042265 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1047532 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1082789 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1086826 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1092661 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1093758 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1095267 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1096442 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1100953 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1105212 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1109164 00:44:54.677 Removing: 
/var/run/dpdk/spdk_pid1116414 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1116416 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1120881 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1121064 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1121328 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1121726 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1121803 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1123147 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1124984 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1126943 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1128502 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1130196 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1131820 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1137565 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1138123 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1139820 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1140831 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1146433 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1149078 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1154265 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1159624 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1168666 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1175537 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1175584 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1193789 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1194251 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1194917 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1195376 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1196109 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1196578 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1197178 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1197711 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1201683 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1202019 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1207837 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1208103 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1213769 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1217930 
00:44:54.677 Removing: /var/run/dpdk/spdk_pid1227434 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1227950 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1232071 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1232309 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1236438 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1241988 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1244485 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1254210 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1263267 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1264831 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1265719 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1281537 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1285278 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1287945 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1295573 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1295677 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1300621 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1302509 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1304757 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1305978 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1307910 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1308945 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1317518 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1317964 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1318412 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1320642 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1321184 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1321717 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1325494 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1325507 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1326983 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1327526 00:44:54.677 Removing: /var/run/dpdk/spdk_pid1327534 00:44:54.677 Removing: /var/run/dpdk/spdk_pid770797 00:44:54.677 Removing: /var/run/dpdk/spdk_pid771829 00:44:54.677 Removing: /var/run/dpdk/spdk_pid772887 00:44:54.677 Removing: /var/run/dpdk/spdk_pid773512 00:44:54.677 Removing: 
/var/run/dpdk/spdk_pid774434 00:44:54.677 Removing: /var/run/dpdk/spdk_pid774473 00:44:54.677 Removing: /var/run/dpdk/spdk_pid775506 00:44:54.677 Removing: /var/run/dpdk/spdk_pid775624 00:44:54.677 Removing: /var/run/dpdk/spdk_pid775948 00:44:54.677 Removing: /var/run/dpdk/spdk_pid777448 00:44:54.677 Removing: /var/run/dpdk/spdk_pid778693 00:44:54.677 Removing: /var/run/dpdk/spdk_pid778976 00:44:54.677 Removing: /var/run/dpdk/spdk_pid779259 00:44:54.677 Removing: /var/run/dpdk/spdk_pid779557 00:44:54.677 Removing: /var/run/dpdk/spdk_pid779704 00:44:54.677 Removing: /var/run/dpdk/spdk_pid779892 00:44:54.677 Removing: /var/run/dpdk/spdk_pid780133 00:44:54.677 Removing: /var/run/dpdk/spdk_pid780410 00:44:54.677 Removing: /var/run/dpdk/spdk_pid781144 00:44:54.677 Removing: /var/run/dpdk/spdk_pid784065 00:44:54.677 Removing: /var/run/dpdk/spdk_pid784317 00:44:54.677 Removing: /var/run/dpdk/spdk_pid784565 00:44:54.677 Removing: /var/run/dpdk/spdk_pid784572 00:44:54.677 Removing: /var/run/dpdk/spdk_pid785058 00:44:54.935 Removing: /var/run/dpdk/spdk_pid785061 00:44:54.935 Removing: /var/run/dpdk/spdk_pid785550 00:44:54.935 Removing: /var/run/dpdk/spdk_pid785673 00:44:54.935 Removing: /var/run/dpdk/spdk_pid786019 00:44:54.935 Removing: /var/run/dpdk/spdk_pid786029 00:44:54.935 Removing: /var/run/dpdk/spdk_pid786275 00:44:54.935 Removing: /var/run/dpdk/spdk_pid786291 00:44:54.935 Removing: /var/run/dpdk/spdk_pid786838 00:44:54.935 Removing: /var/run/dpdk/spdk_pid787083 00:44:54.935 Removing: /var/run/dpdk/spdk_pid787372 00:44:54.935 Removing: /var/run/dpdk/spdk_pid791018 00:44:54.935 Removing: /var/run/dpdk/spdk_pid795339 00:44:54.935 Removing: /var/run/dpdk/spdk_pid805755 00:44:54.935 Removing: /var/run/dpdk/spdk_pid806430 00:44:54.935 Removing: /var/run/dpdk/spdk_pid810626 00:44:54.935 Removing: /var/run/dpdk/spdk_pid811030 00:44:54.935 Removing: /var/run/dpdk/spdk_pid815276 00:44:54.935 Removing: /var/run/dpdk/spdk_pid821043 00:44:54.935 Removing: 
/var/run/dpdk/spdk_pid823582 00:44:54.935 Removing: /var/run/dpdk/spdk_pid833617 00:44:54.935 Removing: /var/run/dpdk/spdk_pid842670 00:44:54.935 Removing: /var/run/dpdk/spdk_pid844749 00:44:54.935 Removing: /var/run/dpdk/spdk_pid845737 00:44:54.935 Removing: /var/run/dpdk/spdk_pid862319 00:44:54.935 Removing: /var/run/dpdk/spdk_pid866322 00:44:54.935 Removing: /var/run/dpdk/spdk_pid947523 00:44:54.935 Removing: /var/run/dpdk/spdk_pid952768 00:44:54.935 Removing: /var/run/dpdk/spdk_pid958363 00:44:54.935 Removing: /var/run/dpdk/spdk_pid964592 00:44:54.935 Removing: /var/run/dpdk/spdk_pid964621 00:44:54.936 Removing: /var/run/dpdk/spdk_pid965485 00:44:54.936 Removing: /var/run/dpdk/spdk_pid966401 00:44:54.936 Removing: /var/run/dpdk/spdk_pid967390 00:44:54.936 Removing: /var/run/dpdk/spdk_pid968231 00:44:54.936 Removing: /var/run/dpdk/spdk_pid968314 00:44:54.936 Removing: /var/run/dpdk/spdk_pid968623 00:44:54.936 Removing: /var/run/dpdk/spdk_pid968681 00:44:54.936 Removing: /var/run/dpdk/spdk_pid968685 00:44:54.936 Removing: /var/run/dpdk/spdk_pid969577 00:44:54.936 Removing: /var/run/dpdk/spdk_pid970464 00:44:54.936 Removing: /var/run/dpdk/spdk_pid971353 00:44:54.936 Removing: /var/run/dpdk/spdk_pid971807 00:44:54.936 Removing: /var/run/dpdk/spdk_pid971815 00:44:54.936 Removing: /var/run/dpdk/spdk_pid972061 00:44:54.936 Removing: /var/run/dpdk/spdk_pid973200 00:44:54.936 Removing: /var/run/dpdk/spdk_pid974203 00:44:54.936 Removing: /var/run/dpdk/spdk_pid982116 00:44:54.936 Clean 00:44:54.936 06:35:15 -- common/autotest_common.sh@1453 -- # return 0 00:44:54.936 06:35:15 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:44:54.936 06:35:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:54.936 06:35:15 -- common/autotest_common.sh@10 -- # set +x 00:44:55.194 06:35:15 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:44:55.194 06:35:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:55.194 06:35:15 -- common/autotest_common.sh@10 -- # 
set +x 00:44:55.194 06:35:15 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:44:55.194 06:35:15 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:44:55.194 06:35:15 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:44:55.194 06:35:15 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:44:55.194 06:35:15 -- spdk/autotest.sh@398 -- # hostname 00:44:55.194 06:35:15 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-04 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:44:55.194 geninfo: WARNING: invalid characters removed from testname! 
00:45:17.133 06:35:35 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:18.510 06:35:38 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:20.414 06:35:40 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:22.418 06:35:42 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:24.322 06:35:44 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:26.225 06:35:46 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:28.130 06:35:47 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:45:28.130 06:35:47 -- spdk/autorun.sh@1 -- $ timing_finish 00:45:28.130 06:35:47 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:45:28.130 06:35:47 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:45:28.130 06:35:47 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:45:28.130 06:35:47 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:45:28.130 + [[ -n 676112 ]] 00:45:28.130 + sudo kill 676112 00:45:28.140 [Pipeline] } 00:45:28.156 [Pipeline] // stage 00:45:28.160 [Pipeline] } 00:45:28.175 [Pipeline] // timeout 00:45:28.179 [Pipeline] } 00:45:28.193 [Pipeline] // catchError 00:45:28.198 [Pipeline] } 00:45:28.212 [Pipeline] // wrap 00:45:28.218 [Pipeline] } 00:45:28.231 [Pipeline] // catchError 00:45:28.239 [Pipeline] stage 00:45:28.242 [Pipeline] { (Epilogue) 00:45:28.254 [Pipeline] catchError 00:45:28.256 [Pipeline] { 00:45:28.268 [Pipeline] echo 00:45:28.270 Cleanup processes 
00:45:28.275 [Pipeline] sh 00:45:28.563 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:28.563 1339216 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:28.576 [Pipeline] sh 00:45:28.862 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:28.862 ++ grep -v 'sudo pgrep' 00:45:28.862 ++ awk '{print $1}' 00:45:28.862 + sudo kill -9 00:45:28.862 + true 00:45:28.873 [Pipeline] sh 00:45:29.157 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:45:41.380 [Pipeline] sh 00:45:41.664 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:45:41.664 Artifacts sizes are good 00:45:41.677 [Pipeline] archiveArtifacts 00:45:41.684 Archiving artifacts 00:45:41.847 [Pipeline] sh 00:45:42.131 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:45:42.145 [Pipeline] cleanWs 00:45:42.155 [WS-CLEANUP] Deleting project workspace... 00:45:42.156 [WS-CLEANUP] Deferred wipeout is used... 00:45:42.162 [WS-CLEANUP] done 00:45:42.164 [Pipeline] } 00:45:42.182 [Pipeline] // catchError 00:45:42.193 [Pipeline] sh 00:45:42.474 + logger -p user.info -t JENKINS-CI 00:45:42.482 [Pipeline] } 00:45:42.496 [Pipeline] // stage 00:45:42.501 [Pipeline] } 00:45:42.515 [Pipeline] // node 00:45:42.521 [Pipeline] End of Pipeline 00:45:42.573 Finished: SUCCESS